Test Report: KVM_Linux_containerd 22047

4655c6aa5049635fb4cb98fc0f74f66a1c57dbdb:2025-12-06:42658

Test fail (14/437)

TestAddons/serial/Volcano (374.07s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 29.046577ms
addons_test.go:876: volcano-admission stabilized in 32.631064ms
addons_test.go:868: volcano-scheduler stabilized in 34.429414ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-hld5k" [1ddd5808-c6e9-4c72-8c7e-2f29478962f1] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
addons_test.go:890: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:890: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-269722 -n addons-269722
addons_test.go:890: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-12-06 09:18:49.242802659 +0000 UTC m=+502.530179794
addons_test.go:890: (dbg) Run:  kubectl --context addons-269722 describe po volcano-scheduler-76c996c8bf-hld5k -n volcano-system
addons_test.go:890: (dbg) kubectl --context addons-269722 describe po volcano-scheduler-76c996c8bf-hld5k -n volcano-system:
Name:                 volcano-scheduler-76c996c8bf-hld5k
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 addons-269722/192.168.39.220
Start Time:           Sat, 06 Dec 2025 09:11:36 +0000
Labels:               app=volcano-scheduler
pod-template-hash=76c996c8bf
Annotations:          <none>
Status:               Pending
SeccompProfile:       RuntimeDefault
IP:                   10.244.0.21
IPs:
IP:           10.244.0.21
Controlled By:  ReplicaSet/volcano-scheduler-76c996c8bf
Containers:
volcano-scheduler:
Container ID:  
Image:         docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34
Image ID:      
Port:          <none>
Host Port:     <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
--kube-api-qps=2000
--kube-api-burst=2000
--schedule-period=1s
--node-worker-threads=20
-v=3
2>&1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
DEBUG_SOCKET_DIR:  /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9tx7p (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
scheduler-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      volcano-scheduler-configmap
Optional:  false
klog-sock:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-9tx7p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  7m13s                  default-scheduler  Successfully assigned volcano-system/volcano-scheduler-76c996c8bf-hld5k to addons-269722
Warning  Failed     5m2s                   kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34": failed to pull and unpack image "docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:3eb52da5409aa390f0a8ce2bc6c24da841dd0f7810baabefc0b18b5ca982e5b8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m32s (x5 over 7m10s)  kubelet            Pulling image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
Warning  Failed     3m31s (x4 over 6m29s)  kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34": failed to pull and unpack image "docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m31s (x5 over 6m29s)  kubelet            Error: ErrImagePull
Warning  Failed     89s (x20 over 6m29s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    75s (x21 over 6m29s)   kubelet            Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
addons_test.go:890: (dbg) Run:  kubectl --context addons-269722 logs volcano-scheduler-76c996c8bf-hld5k -n volcano-system
addons_test.go:890: (dbg) Non-zero exit: kubectl --context addons-269722 logs volcano-scheduler-76c996c8bf-hld5k -n volcano-system: exit status 1 (71.93549ms)

** stderr ** 
	Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-76c996c8bf-hld5k" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:890: kubectl --context addons-269722 logs volcano-scheduler-76c996c8bf-hld5k -n volcano-system: exit status 1
addons_test.go:891: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
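Root cause: every pull of docker.io/volcanosh/vc-scheduler:v1.13.0 was rejected by Docker Hub with HTTP 429 (toomanyrequests, the unauthenticated pull rate limit), so the volcano-scheduler pod never left ImagePullBackOff and the 6m0s wait above expired. To confirm how much anonymous pull quota the CI host has left, Docker's documented rate-limit preview image can be queried; the commands below are a sketch only (curl and jq assumed present on the host, not part of this test run):

	# sketch: check remaining anonymous Docker Hub pull quota (assumes curl and jq)
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'

If the quota is exhausted, pre-loading the image into the profile before enabling the addon (for example, minikube image load docker.io/volcanosh/vc-scheduler:v1.13.0 -p addons-269722) or pulling with authenticated Docker Hub credentials would avoid the limit on retry.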
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-269722 -n addons-269722
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 logs -n 25: (1.185093282s)
helpers_test.go:260: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                     ARGS                                                                                                                                                                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-600827 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd                                                                                                                                                                                                                                                                                          │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-600827                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ start   │ -o=json --download-only -p download-only-345944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd                                                                                                                                                                                                                                                                                          │ download-only-345944 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-345944                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-345944 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ start   │ -o=json --download-only -p download-only-802744 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd                                                                                                                                                                                                                                                                                   │ download-only-802744 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-802744                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-802744 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-600827                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-345944                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-345944 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-802744                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-802744 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ start   │ --download-only -p binary-mirror-098159 --alsologtostderr --binary-mirror http://127.0.0.1:43773 --driver=kvm2  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-098159 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ -p binary-mirror-098159                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ binary-mirror-098159 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ addons  │ enable dashboard -p addons-269722                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ addons  │ disable dashboard -p addons-269722                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ start   │ -p addons-269722 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:12 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:41.905948  388517 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:41.906056  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:41.906068  388517 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:41.906073  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:41.906290  388517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:10:41.906764  388517 out.go:368] Setting JSON to false
	I1206 09:10:41.907751  388517 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6792,"bootTime":1765005450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:41.907809  388517 start.go:143] virtualization: kvm guest
	I1206 09:10:41.909713  388517 out.go:179] * [addons-269722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:10:41.911209  388517 notify.go:221] Checking for updates...
	I1206 09:10:41.911229  388517 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:10:41.912645  388517 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:41.913886  388517 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:10:41.915020  388517 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:41.919365  388517 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:10:41.920580  388517 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:10:41.921823  388517 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:10:41.950647  388517 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 09:10:41.951784  388517 start.go:309] selected driver: kvm2
	I1206 09:10:41.951797  388517 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:10:41.951808  388517 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:10:41.952432  388517 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:10:41.952640  388517 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:10:41.952666  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:10:41.952706  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:10:41.952714  388517 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:10:41.952753  388517 start.go:353] cluster config:
	{Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
ontainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:41.952877  388517 iso.go:125] acquiring lock: {Name:mk1a7d442a240aa1785a2e6e751e007c5a8723f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:10:41.954741  388517 out.go:179] * Starting "addons-269722" primary control-plane node in "addons-269722" cluster
	I1206 09:10:41.955614  388517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:10:41.955638  388517 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1206 09:10:41.955646  388517 cache.go:65] Caching tarball of preloaded images
	I1206 09:10:41.955737  388517 preload.go:238] Found /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1206 09:10:41.955748  388517 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 09:10:41.956043  388517 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json ...
	I1206 09:10:41.956066  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json: {Name:mka83bdbdc23544e613eb52d015ad5fe63a1e910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:41.956183  388517 start.go:360] acquireMachinesLock for addons-269722: {Name:mkc77d1cf752e1546ce7850a29dbe975ae7fa9b7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:10:41.956225  388517 start.go:364] duration metric: took 30.995µs to acquireMachinesLock for "addons-269722"
	I1206 09:10:41.956247  388517 start.go:93] Provisioning new machine with config: &{Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:10:41.956289  388517 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 09:10:41.957646  388517 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1206 09:10:41.957797  388517 start.go:159] libmachine.API.Create for "addons-269722" (driver="kvm2")
	I1206 09:10:41.957831  388517 client.go:173] LocalClient.Create starting
	I1206 09:10:41.957926  388517 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem
	I1206 09:10:41.993468  388517 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem
	I1206 09:10:42.078767  388517 main.go:143] libmachine: creating domain...
	I1206 09:10:42.078784  388517 main.go:143] libmachine: creating network...
	I1206 09:10:42.080023  388517 main.go:143] libmachine: found existing default network
	I1206 09:10:42.080210  388517 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:10:42.080787  388517 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d56770}
	I1206 09:10:42.080910  388517 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-269722</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:10:42.086592  388517 main.go:143] libmachine: creating private network mk-addons-269722 192.168.39.0/24...
	I1206 09:10:42.152917  388517 main.go:143] libmachine: private network mk-addons-269722 192.168.39.0/24 created
	I1206 09:10:42.153176  388517 main.go:143] libmachine: <network>
	  <name>mk-addons-269722</name>
	  <uuid>2336c74c-93b2-42b0-890b-3a8a8a25a922</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:fd:c9:1f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:10:42.153203  388517 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 ...
	I1206 09:10:42.153230  388517 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22047-383742/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 09:10:42.153244  388517 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:42.153313  388517 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22047-383742/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22047-383742/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
	I1206 09:10:42.415061  388517 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa...
	I1206 09:10:42.429309  388517 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk...
	I1206 09:10:42.429369  388517 main.go:143] libmachine: Writing magic tar header
	I1206 09:10:42.429404  388517 main.go:143] libmachine: Writing SSH key tar header
	I1206 09:10:42.429498  388517 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 ...
	I1206 09:10:42.429571  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722
	I1206 09:10:42.429604  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 (perms=drwx------)
	I1206 09:10:42.429623  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube/machines
	I1206 09:10:42.429636  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube/machines (perms=drwxr-xr-x)
	I1206 09:10:42.429647  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:42.429656  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube (perms=drwxr-xr-x)
	I1206 09:10:42.429674  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742
	I1206 09:10:42.429704  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742 (perms=drwxrwxr-x)
	I1206 09:10:42.429722  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1206 09:10:42.429744  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 09:10:42.429758  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1206 09:10:42.429765  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 09:10:42.429775  388517 main.go:143] libmachine: checking permissions on dir: /home
	I1206 09:10:42.429781  388517 main.go:143] libmachine: skipping /home - not owner
	I1206 09:10:42.429788  388517 main.go:143] libmachine: defining domain...
	I1206 09:10:42.431063  388517 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-269722</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-269722'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1206 09:10:42.438342  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:8d:9c:cf in network default
	I1206 09:10:42.438932  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:42.438948  388517 main.go:143] libmachine: starting domain...
	I1206 09:10:42.438952  388517 main.go:143] libmachine: ensuring networks are active...
	I1206 09:10:42.439580  388517 main.go:143] libmachine: Ensuring network default is active
	I1206 09:10:42.439915  388517 main.go:143] libmachine: Ensuring network mk-addons-269722 is active
	I1206 09:10:42.440425  388517 main.go:143] libmachine: getting domain XML...
	I1206 09:10:42.441355  388517 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-269722</name>
	  <uuid>faaa974f-af9d-46f8-a3b5-02afcdf78e43</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f2:80:b2'/>
	      <source network='mk-addons-269722'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:8d:9c:cf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1206 09:10:43.781082  388517 main.go:143] libmachine: waiting for domain to start...
	I1206 09:10:43.782318  388517 main.go:143] libmachine: domain is now running
	I1206 09:10:43.782338  388517 main.go:143] libmachine: waiting for IP...
	I1206 09:10:43.783021  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:43.783369  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:43.783385  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:43.783643  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:43.783696  388517 retry.go:31] will retry after 278.987444ms: waiting for domain to come up
	I1206 09:10:44.064124  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.064595  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.064606  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.064919  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.064957  388517 retry.go:31] will retry after 330.689041ms: waiting for domain to come up
	I1206 09:10:44.397460  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.397947  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.397962  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.398238  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.398277  388517 retry.go:31] will retry after 413.406233ms: waiting for domain to come up
	I1206 09:10:44.812999  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.813581  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.813601  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.813924  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.813970  388517 retry.go:31] will retry after 440.754763ms: waiting for domain to come up
	I1206 09:10:45.256730  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:45.257210  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:45.257228  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:45.257514  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:45.257556  388517 retry.go:31] will retry after 717.110818ms: waiting for domain to come up
	I1206 09:10:45.975902  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:45.976408  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:45.976424  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:45.976689  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:45.976722  388517 retry.go:31] will retry after 589.246662ms: waiting for domain to come up
	I1206 09:10:46.567419  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:46.567953  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:46.567973  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:46.568280  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:46.568326  388517 retry.go:31] will retry after 857.836192ms: waiting for domain to come up
	I1206 09:10:47.427627  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:47.428082  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:47.428097  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:47.428421  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:47.428475  388517 retry.go:31] will retry after 969.137484ms: waiting for domain to come up
	I1206 09:10:48.399647  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:48.400199  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:48.400215  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:48.400562  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:48.400615  388517 retry.go:31] will retry after 1.740343977s: waiting for domain to come up
	I1206 09:10:50.143512  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:50.143999  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:50.144014  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:50.144329  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:50.144363  388517 retry.go:31] will retry after 2.180103707s: waiting for domain to come up
	I1206 09:10:52.325956  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:52.326470  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:52.326485  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:52.326823  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:52.326870  388517 retry.go:31] will retry after 2.821995124s: waiting for domain to come up
	I1206 09:10:55.151850  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:55.152380  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:55.152397  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:55.152818  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:55.152881  388517 retry.go:31] will retry after 2.278330426s: waiting for domain to come up
	I1206 09:10:57.432300  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:57.432813  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:57.432829  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:57.433107  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:57.433144  388517 retry.go:31] will retry after 3.558016636s: waiting for domain to come up
	I1206 09:11:00.994805  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:00.995368  388517 main.go:143] libmachine: domain addons-269722 has current primary IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:00.995386  388517 main.go:143] libmachine: found domain IP: 192.168.39.220
	I1206 09:11:00.995394  388517 main.go:143] libmachine: reserving static IP address...
	I1206 09:11:00.995774  388517 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-269722", mac: "52:54:00:f2:80:b2", ip: "192.168.39.220"} in network mk-addons-269722
	I1206 09:11:01.169742  388517 main.go:143] libmachine: reserved static IP address 192.168.39.220 for domain addons-269722
	I1206 09:11:01.169781  388517 main.go:143] libmachine: waiting for SSH...
	I1206 09:11:01.169788  388517 main.go:143] libmachine: Getting to WaitForSSH function...
	I1206 09:11:01.172807  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.173481  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.173514  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.173694  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.173964  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.173979  388517 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1206 09:11:01.272210  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:11:01.272513  388517 main.go:143] libmachine: domain creation complete
	I1206 09:11:01.273828  388517 machine.go:94] provisionDockerMachine start ...
	I1206 09:11:01.275801  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.276155  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.276181  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.276321  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.276511  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.276520  388517 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:11:01.373100  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1206 09:11:01.373130  388517 buildroot.go:166] provisioning hostname "addons-269722"
	I1206 09:11:01.375944  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.376345  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.376372  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.376608  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.376841  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.376854  388517 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-269722 && echo "addons-269722" | sudo tee /etc/hostname
	I1206 09:11:01.490874  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-269722
	
	I1206 09:11:01.493600  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.493995  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.494015  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.494204  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.494457  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.494481  388517 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-269722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-269722/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-269722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:11:01.601899  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:11:01.601925  388517 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-383742/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-383742/.minikube}
	I1206 09:11:01.601941  388517 buildroot.go:174] setting up certificates
	I1206 09:11:01.601950  388517 provision.go:84] configureAuth start
	I1206 09:11:01.604648  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.605083  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.605108  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607340  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607665  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.607684  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607799  388517 provision.go:143] copyHostCerts
	I1206 09:11:01.607857  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/ca.pem (1082 bytes)
	I1206 09:11:01.608028  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/cert.pem (1123 bytes)
	I1206 09:11:01.608130  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/key.pem (1675 bytes)
	I1206 09:11:01.608197  388517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem org=jenkins.addons-269722 san=[127.0.0.1 192.168.39.220 addons-269722 localhost minikube]
	I1206 09:11:01.761887  388517 provision.go:177] copyRemoteCerts
	I1206 09:11:01.761947  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:11:01.764212  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.764543  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.764581  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.764716  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:01.844794  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:11:01.873452  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:11:01.901904  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:11:01.930285  388517 provision.go:87] duration metric: took 328.321351ms to configureAuth
	I1206 09:11:01.930311  388517 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:11:01.930501  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:01.930521  388517 machine.go:97] duration metric: took 656.676665ms to provisionDockerMachine
	I1206 09:11:01.930531  388517 client.go:176] duration metric: took 19.972691553s to LocalClient.Create
	I1206 09:11:01.930551  388517 start.go:167] duration metric: took 19.97275355s to libmachine.API.Create "addons-269722"
	I1206 09:11:01.930596  388517 start.go:293] postStartSetup for "addons-269722" (driver="kvm2")
	I1206 09:11:01.930611  388517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:11:01.930658  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:11:01.933229  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.933604  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.933625  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.933768  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.013069  388517 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:11:02.017563  388517 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:11:02.017583  388517 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-383742/.minikube/addons for local assets ...
	I1206 09:11:02.017651  388517 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-383742/.minikube/files for local assets ...
	I1206 09:11:02.017684  388517 start.go:296] duration metric: took 87.076069ms for postStartSetup
	I1206 09:11:02.020584  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.020944  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.020967  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.021198  388517 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json ...
	I1206 09:11:02.021364  388517 start.go:128] duration metric: took 20.065065791s to createHost
	I1206 09:11:02.023485  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.023794  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.023813  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.023959  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:02.024173  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:02.024185  388517 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:11:02.121919  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765012262.085933657
	
	I1206 09:11:02.121936  388517 fix.go:216] guest clock: 1765012262.085933657
	I1206 09:11:02.121942  388517 fix.go:229] Guest: 2025-12-06 09:11:02.085933657 +0000 UTC Remote: 2025-12-06 09:11:02.021381724 +0000 UTC m=+20.161953678 (delta=64.551933ms)
	I1206 09:11:02.121960  388517 fix.go:200] guest clock delta is within tolerance: 64.551933ms
	I1206 09:11:02.121974  388517 start.go:83] releasing machines lock for "addons-269722", held for 20.165731842s
	I1206 09:11:02.124594  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.124944  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.124973  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.125474  388517 ssh_runner.go:195] Run: cat /version.json
	I1206 09:11:02.125592  388517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:11:02.128433  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.128746  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.128763  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.128921  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.128989  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.129445  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.129480  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.129624  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.204247  388517 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:02.228305  388517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:11:02.234563  388517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:11:02.234633  388517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:11:02.260428  388517 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:11:02.260454  388517 start.go:496] detecting cgroup driver to use...
	I1206 09:11:02.260528  388517 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:11:02.297166  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:11:02.315488  388517 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:11:02.315555  388517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:11:02.332111  388517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:11:02.347076  388517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:11:02.491701  388517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:11:02.703514  388517 docker.go:234] disabling docker service ...
	I1206 09:11:02.703604  388517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:11:02.719452  388517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:11:02.733466  388517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:11:02.882667  388517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:11:03.020738  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:11:03.036166  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:11:03.057682  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:11:03.069874  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:11:03.081945  388517 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:11:03.082022  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:11:03.094105  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:11:03.106250  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:11:03.117968  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:11:03.130001  388517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:11:03.142658  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:11:03.154729  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:11:03.166983  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:11:03.178658  388517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:11:03.188759  388517 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 09:11:03.188803  388517 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 09:11:03.211314  388517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:11:03.224103  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:03.361032  388517 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:11:03.404281  388517 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:11:03.404385  388517 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:11:03.409523  388517 retry.go:31] will retry after 1.49666292s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1206 09:11:04.906469  388517 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:11:04.912677  388517 start.go:564] Will wait 60s for crictl version
	I1206 09:11:04.912759  388517 ssh_runner.go:195] Run: which crictl
	I1206 09:11:04.916909  388517 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:11:04.952021  388517 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1206 09:11:04.952114  388517 ssh_runner.go:195] Run: containerd --version
	I1206 09:11:04.979176  388517 ssh_runner.go:195] Run: containerd --version
	I1206 09:11:05.046042  388517 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 1.7.23 ...
	I1206 09:11:05.113332  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:05.113713  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:05.113733  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:05.113904  388517 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:11:05.118728  388517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:05.134279  388517 kubeadm.go:884] updating cluster {Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:11:05.134389  388517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:11:05.134436  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:05.163245  388517 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1206 09:11:05.163338  388517 ssh_runner.go:195] Run: which lz4
	I1206 09:11:05.167791  388517 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 09:11:05.172645  388517 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 09:11:05.172675  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (339763354 bytes)
	I1206 09:11:06.408453  388517 containerd.go:563] duration metric: took 1.240701247s to copy over tarball
	I1206 09:11:06.408534  388517 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 09:11:07.824785  388517 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.41620911s)
	I1206 09:11:07.824829  388517 containerd.go:570] duration metric: took 1.416348198s to extract the tarball
	I1206 09:11:07.824837  388517 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 09:11:07.876750  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:08.019449  388517 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:11:08.055912  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:08.089979  388517 retry.go:31] will retry after 204.800226ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:08Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1206 09:11:08.295519  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:08.332986  388517 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:11:08.333019  388517 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:11:08.333035  388517 kubeadm.go:935] updating node { 192.168.39.220 8443 v1.34.2 containerd true true} ...
	I1206 09:11:08.333199  388517 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-269722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:11:08.333263  388517 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:11:08.363626  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:11:08.363652  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:11:08.363671  388517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:11:08.363694  388517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-269722 NodeName:addons-269722 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:11:08.363802  388517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-269722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:11:08.363898  388517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:11:08.376320  388517 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:11:08.376400  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:11:08.387974  388517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1206 09:11:08.408073  388517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:11:08.428105  388517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1206 09:11:08.448237  388517 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I1206 09:11:08.452207  388517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:08.466654  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:08.612134  388517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:08.650190  388517 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722 for IP: 192.168.39.220
	I1206 09:11:08.650221  388517 certs.go:195] generating shared ca certs ...
	I1206 09:11:08.650248  388517 certs.go:227] acquiring lock for ca certs: {Name:mkf308ce4033be42aa40d533f6774edcee747959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.650426  388517 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key
	I1206 09:11:08.753472  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt ...
	I1206 09:11:08.753502  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt: {Name:mk0bc547e2c4a3698a714e2e67e37fe0843ac532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.753663  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key ...
	I1206 09:11:08.753675  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key: {Name:mk257636778cdf81faeb62cfd641c994d65ea561 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.753763  388517 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key
	I1206 09:11:08.944161  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt ...
	I1206 09:11:08.944193  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt: {Name:mk7a27f62c25f1293f691b851f1b366a8491b851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.944357  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key ...
	I1206 09:11:08.944369  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key: {Name:mk0dbe369ea38e824cffd9d96349344507b04d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.944442  388517 certs.go:257] generating profile certs ...
	I1206 09:11:08.944507  388517 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key
	I1206 09:11:08.944522  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt with IP's: []
	I1206 09:11:09.004417  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt ...
	I1206 09:11:09.004443  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: {Name:mkc7ee580529997a0158c489e5de6aaaab4381ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.004577  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key ...
	I1206 09:11:09.004587  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key: {Name:mk6aea14e5a790daaff4a5aa584541cbd36fa7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.004653  388517 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9
	I1206 09:11:09.004671  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220]
	I1206 09:11:09.103453  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 ...
	I1206 09:11:09.103485  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9: {Name:mkb69edd53ea15cc714b2e6dcd35fb9bda8e0a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.103642  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9 ...
	I1206 09:11:09.103658  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9: {Name:mkbef642e3d05cf341f2d82d3597bab753cd2174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.103728  388517 certs.go:382] copying /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 -> /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt
	I1206 09:11:09.103816  388517 certs.go:386] copying /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9 -> /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key
	I1206 09:11:09.103876  388517 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key
	I1206 09:11:09.103896  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt with IP's: []
	I1206 09:11:09.195473  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt ...
	I1206 09:11:09.195504  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt: {Name:mk1ed5a652995aaac584bd788ffca22c7d7d4179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.195645  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key ...
	I1206 09:11:09.195657  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key: {Name:mkb0905602ecfb2d53502a566a95204a8f98bd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.195846  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 09:11:09.195899  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:11:09.195942  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:11:09.195967  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem (1675 bytes)
	I1206 09:11:09.196610  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:11:09.227924  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:11:09.257244  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:11:09.287169  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:11:09.319682  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:11:09.354785  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:11:09.391203  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:11:09.419761  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:11:09.448250  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:11:09.476343  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:11:09.495953  388517 ssh_runner.go:195] Run: openssl version
	I1206 09:11:09.502134  388517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.512996  388517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:11:09.524111  388517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.529273  388517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.529325  388517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.536780  388517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:11:09.547642  388517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:11:09.558961  388517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:11:09.563664  388517 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:11:09.563723  388517 kubeadm.go:401] StartCluster: {Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:09.563812  388517 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:11:09.563854  388517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:11:09.597231  388517 cri.go:89] found id: ""
	I1206 09:11:09.597295  388517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:11:09.609197  388517 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:11:09.619916  388517 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:11:09.631012  388517 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:11:09.631028  388517 kubeadm.go:158] found existing configuration files:
	
	I1206 09:11:09.631067  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:11:09.641398  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:11:09.641442  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:11:09.652328  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:11:09.662630  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:11:09.662683  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:11:09.673582  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:11:09.683944  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:11:09.683997  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:11:09.694924  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:11:09.705284  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:11:09.705332  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:11:09.716270  388517 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 09:11:09.765023  388517 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:11:09.765245  388517 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:11:09.858054  388517 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:11:09.858229  388517 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:11:09.858396  388517 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:11:09.865139  388517 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:11:09.920280  388517 out.go:252]   - Generating certificates and keys ...
	I1206 09:11:09.920378  388517 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:11:09.920462  388517 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:11:10.105985  388517 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:11:10.865814  388517 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:11:10.897033  388517 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:11:11.249180  388517 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:11:11.405265  388517 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:11:11.405459  388517 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-269722 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1206 09:11:11.595783  388517 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:11:11.595930  388517 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-269722 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1206 09:11:11.685113  388517 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:11:11.795320  388517 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:11:12.056322  388517 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:11:12.057602  388517 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:11:12.245522  388517 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:11:12.344100  388517 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:11:12.481696  388517 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:11:12.805057  388517 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:11:12.987909  388517 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:11:12.988354  388517 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:11:12.990637  388517 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:11:12.992591  388517 out.go:252]   - Booting up control plane ...
	I1206 09:11:12.992683  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:11:12.992757  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:11:12.992829  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:11:13.009376  388517 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:11:13.009528  388517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:11:13.016083  388517 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:11:13.016157  388517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:11:13.016213  388517 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:11:13.195314  388517 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:11:13.195457  388517 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:11:13.696155  388517 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.400144ms
	I1206 09:11:13.701317  388517 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:11:13.701412  388517 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.220:8443/livez
	I1206 09:11:13.701516  388517 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:11:13.701609  388517 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:11:15.925448  388517 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.2258309s
	I1206 09:11:17.097937  388517 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.399298925s
	I1206 09:11:19.199961  388517 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502821586s
	I1206 09:11:19.217728  388517 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:11:19.231172  388517 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:11:19.244842  388517 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:11:19.245047  388517 kubeadm.go:319] [mark-control-plane] Marking the node addons-269722 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:11:19.255597  388517 kubeadm.go:319] [bootstrap-token] Using token: tnc6di.0o5js773tkjcekar
	I1206 09:11:19.256827  388517 out.go:252]   - Configuring RBAC rules ...
	I1206 09:11:19.256963  388517 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:11:19.261388  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:11:19.269766  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:11:19.273599  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:11:19.281952  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:11:19.288853  388517 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:11:19.605592  388517 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:11:20.070227  388517 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:11:20.605934  388517 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:11:20.606844  388517 kubeadm.go:319] 
	I1206 09:11:20.606929  388517 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:11:20.606938  388517 kubeadm.go:319] 
	I1206 09:11:20.607026  388517 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:11:20.607033  388517 kubeadm.go:319] 
	I1206 09:11:20.607064  388517 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:11:20.607146  388517 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:11:20.607224  388517 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:11:20.607234  388517 kubeadm.go:319] 
	I1206 09:11:20.607327  388517 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:11:20.607350  388517 kubeadm.go:319] 
	I1206 09:11:20.607426  388517 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:11:20.607434  388517 kubeadm.go:319] 
	I1206 09:11:20.607510  388517 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:11:20.607639  388517 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:11:20.607758  388517 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:11:20.607774  388517 kubeadm.go:319] 
	I1206 09:11:20.607894  388517 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:11:20.607992  388517 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:11:20.608007  388517 kubeadm.go:319] 
	I1206 09:11:20.608129  388517 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tnc6di.0o5js773tkjcekar \
	I1206 09:11:20.608283  388517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04fdba1f0cc9e5b6ff9fb0c67883e9efc1b2d27a26263d71016b7c2692858db2 \
	I1206 09:11:20.608307  388517 kubeadm.go:319] 	--control-plane 
	I1206 09:11:20.608316  388517 kubeadm.go:319] 
	I1206 09:11:20.608391  388517 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:11:20.608397  388517 kubeadm.go:319] 
	I1206 09:11:20.608494  388517 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tnc6di.0o5js773tkjcekar \
	I1206 09:11:20.608638  388517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04fdba1f0cc9e5b6ff9fb0c67883e9efc1b2d27a26263d71016b7c2692858db2 
	I1206 09:11:20.609835  388517 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:11:20.609893  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:11:20.609910  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:11:20.611407  388517 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:11:20.612520  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:11:20.630100  388517 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1206 09:11:20.652382  388517 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:11:20.652515  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:20.652537  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-269722 minikube.k8s.io/updated_at=2025_12_06T09_11_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-269722 minikube.k8s.io/primary=true
	I1206 09:11:20.694430  388517 ops.go:34] apiserver oom_adj: -16
	I1206 09:11:20.784013  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:21.284280  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:21.784935  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:22.284329  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:22.784096  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:23.284134  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:23.784412  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:24.285006  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:24.365500  388517 kubeadm.go:1114] duration metric: took 3.713041621s to wait for elevateKubeSystemPrivileges
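The repeated "kubectl get sa default" lines above are a ~500ms poll loop: after the minikube-rbac cluster-admin binding is created, minikube waits (here about 3.7s) for the default ServiceAccount to appear before proceeding. A rough shell equivalent of that wait, assuming the same binary and kubeconfig paths seen in the log, would be:

    # Sketch of the wait loop: poll until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done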
	I1206 09:11:24.365554  388517 kubeadm.go:403] duration metric: took 14.801837471s to StartCluster
	I1206 09:11:24.365583  388517 settings.go:142] acquiring lock: {Name:mk5046213dcb1abe0d7fe7b15722aa4884a98be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:24.365735  388517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:11:24.366166  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/kubeconfig: {Name:mka1b03c13e1e115a4ba1af8cb483b83d246825c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:24.366385  388517 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:11:24.366393  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:11:24.366467  388517 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 09:11:24.366579  388517 addons.go:70] Setting yakd=true in profile "addons-269722"
	I1206 09:11:24.366593  388517 addons.go:70] Setting inspektor-gadget=true in profile "addons-269722"
	I1206 09:11:24.366594  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:24.366606  388517 addons.go:239] Setting addon yakd=true in "addons-269722"
	I1206 09:11:24.366612  388517 addons.go:239] Setting addon inspektor-gadget=true in "addons-269722"
	I1206 09:11:24.366637  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366644  388517 addons.go:70] Setting default-storageclass=true in profile "addons-269722"
	I1206 09:11:24.366651  388517 addons.go:70] Setting gcp-auth=true in profile "addons-269722"
	I1206 09:11:24.366663  388517 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-269722"
	I1206 09:11:24.366682  388517 mustload.go:66] Loading cluster: addons-269722
	I1206 09:11:24.366726  388517 addons.go:70] Setting registry-creds=true in profile "addons-269722"
	I1206 09:11:24.366753  388517 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-269722"
	I1206 09:11:24.366753  388517 addons.go:70] Setting cloud-spanner=true in profile "addons-269722"
	I1206 09:11:24.366778  388517 addons.go:239] Setting addon registry-creds=true in "addons-269722"
	I1206 09:11:24.366781  388517 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-269722"
	I1206 09:11:24.366784  388517 addons.go:239] Setting addon cloud-spanner=true in "addons-269722"
	I1206 09:11:24.366787  388517 addons.go:70] Setting storage-provisioner=true in profile "addons-269722"
	I1206 09:11:24.366800  388517 addons.go:239] Setting addon storage-provisioner=true in "addons-269722"
	I1206 09:11:24.366811  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366818  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366819  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366821  388517 addons.go:70] Setting metrics-server=true in profile "addons-269722"
	I1206 09:11:24.366836  388517 addons.go:239] Setting addon metrics-server=true in "addons-269722"
	I1206 09:11:24.366850  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366901  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:24.366979  388517 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-269722"
	I1206 09:11:24.367005  388517 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-269722"
	I1206 09:11:24.367028  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367504  388517 addons.go:70] Setting registry=true in profile "addons-269722"
	I1206 09:11:24.367531  388517 addons.go:239] Setting addon registry=true in "addons-269722"
	I1206 09:11:24.367561  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367879  388517 addons.go:70] Setting ingress=true in profile "addons-269722"
	I1206 09:11:24.367904  388517 addons.go:239] Setting addon ingress=true in "addons-269722"
	I1206 09:11:24.366811  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367940  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367975  388517 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-269722"
	I1206 09:11:24.367998  388517 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-269722"
	I1206 09:11:24.368012  388517 addons.go:70] Setting volcano=true in profile "addons-269722"
	I1206 09:11:24.368028  388517 addons.go:239] Setting addon volcano=true in "addons-269722"
	I1206 09:11:24.368051  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368065  388517 addons.go:70] Setting volumesnapshots=true in profile "addons-269722"
	I1206 09:11:24.368083  388517 addons.go:239] Setting addon volumesnapshots=true in "addons-269722"
	I1206 09:11:24.368108  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368318  388517 addons.go:70] Setting ingress-dns=true in profile "addons-269722"
	I1206 09:11:24.368334  388517 addons.go:239] Setting addon ingress-dns=true in "addons-269722"
	I1206 09:11:24.368504  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368582  388517 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-269722"
	I1206 09:11:24.368650  388517 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-269722"
	I1206 09:11:24.368672  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368873  388517 out.go:179] * Verifying Kubernetes components...
	I1206 09:11:24.366646  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.370225  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:24.371769  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.373754  388517 addons.go:239] Setting addon default-storageclass=true in "addons-269722"
	I1206 09:11:24.373789  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.374301  388517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:11:24.374379  388517 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 09:11:24.375268  388517 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 09:11:24.375275  388517 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 09:11:24.375328  388517 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:24.375343  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:11:24.376013  388517 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 09:11:24.376046  388517 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 09:11:24.376074  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 09:11:24.376035  388517 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 09:11:24.376134  388517 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-269722"
	I1206 09:11:24.376581  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.376790  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 09:11:24.376809  388517 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 09:11:24.376827  388517 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:11:24.376841  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 09:11:24.376847  388517 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 09:11:24.377596  388517 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:11:24.377612  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 09:11:24.378229  388517 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 09:11:24.378237  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 09:11:24.378252  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 09:11:24.378268  388517 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 09:11:24.378231  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:24.378298  388517 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1206 09:11:24.378904  388517 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 09:11:24.378904  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 09:11:24.378253  388517 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 09:11:24.379492  388517 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:24.379507  388517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:11:24.379650  388517 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 09:11:24.379665  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 09:11:24.379672  388517 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:11:24.379683  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 09:11:24.380334  388517 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:11:24.380373  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 09:11:24.380344  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 09:11:24.380559  388517 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 09:11:24.380561  388517 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 09:11:24.381552  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 09:11:24.381577  388517 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1206 09:11:24.382302  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:24.382322  388517 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:11:24.382342  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 09:11:24.384092  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 09:11:24.384119  388517 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1206 09:11:24.384134  388517 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 09:11:24.384092  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 09:11:24.385853  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.386682  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 09:11:24.386986  388517 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:11:24.387009  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 09:11:24.387404  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.387763  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.387799  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388004  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388093  388517 out.go:179]   - Using image docker.io/busybox:stable
	I1206 09:11:24.388126  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388701  388517 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:11:24.388724  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1206 09:11:24.389099  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.389150  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.389220  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 09:11:24.389288  388517 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:11:24.389303  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 09:11:24.389924  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.389981  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390249  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390264  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390288  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390293  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390722  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.390908  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390941  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.391134  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.391134  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.391542  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 09:11:24.391835  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.392214  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.392478  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.393141  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394085  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394128  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394319  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394473  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394510  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394522  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394539  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394585  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 09:11:24.394628  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394751  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.395613  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396225  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396316  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396359  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.396795  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.396833  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397321  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397322  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.397417  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397434  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397472  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397481  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397505  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397761  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 09:11:24.397813  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.397879  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.398225  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.398815  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 09:11:24.398876  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 09:11:24.398990  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399146  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399416  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399466  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.399501  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399518  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.399553  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399720  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.399930  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.400166  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.400198  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.400399  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.401986  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.402373  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.402406  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.402558  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	W1206 09:11:24.544745  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34226->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.544776  388517 retry.go:31] will retry after 167.524935ms: ssh: handshake failed: read tcp 192.168.39.1:34226->192.168.39.220:22: read: connection reset by peer
	W1206 09:11:24.544834  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34242->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.544842  388517 retry.go:31] will retry after 337.340492ms: ssh: handshake failed: read tcp 192.168.39.1:34242->192.168.39.220:22: read: connection reset by peer
	W1206 09:11:24.586807  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34260->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.586836  388517 retry.go:31] will retry after 361.026308ms: ssh: handshake failed: read tcp 192.168.39.1:34260->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.720251  388517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:24.720260  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
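The one-liner above edits the coredns ConfigMap in place: it inserts a hosts stanza resolving host.minikube.internal to the host-side IP and adds query logging. Reconstructed from the sed expressions (not dumped from the cluster), the patched Corefile gains roughly the following, with "log" inserted before the existing "errors" line and the hosts block inserted before the existing forward line:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf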
	I1206 09:11:24.915042  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:24.943642  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:11:24.946926  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:11:25.098136  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:25.119770  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 09:11:25.119795  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 09:11:25.208175  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:11:25.224407  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 09:11:25.224432  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 09:11:25.225309  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:11:25.232666  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 09:11:25.232682  388517 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 09:11:25.246755  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 09:11:25.246777  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 09:11:25.247663  388517 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 09:11:25.247683  388517 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 09:11:25.270838  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:11:25.331361  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:11:25.449965  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 09:11:25.469046  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:11:25.613424  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 09:11:25.613456  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 09:11:25.633923  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 09:11:25.633954  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 09:11:25.657079  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 09:11:25.657110  388517 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 09:11:25.695667  388517 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:11:25.695693  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 09:11:25.696553  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:11:25.756474  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 09:11:25.756502  388517 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 09:11:26.160704  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 09:11:26.160736  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 09:11:26.284633  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 09:11:26.284662  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 09:11:26.286985  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:11:26.434395  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 09:11:26.434422  388517 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 09:11:26.465197  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:11:26.465225  388517 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 09:11:26.661217  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:11:26.661249  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 09:11:26.705778  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:11:26.774501  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 09:11:26.774527  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 09:11:26.849719  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:11:26.906080  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 09:11:26.906136  388517 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 09:11:27.000268  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 09:11:27.000294  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 09:11:27.610778  388517 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:27.610815  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 09:11:27.800583  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 09:11:27.800607  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 09:11:27.882544  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:28.272413  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 09:11:28.272451  388517 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 09:11:28.298383  388517 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.578087161s)
	I1206 09:11:28.298435  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.38335524s)
	I1206 09:11:28.298380  388517 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.578018491s)
	I1206 09:11:28.298514  388517 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1206 09:11:28.298551  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.354877639s)
	I1206 09:11:28.298640  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.351685893s)
	I1206 09:11:28.299174  388517 node_ready.go:35] waiting up to 6m0s for node "addons-269722" to be "Ready" ...
	I1206 09:11:28.373103  388517 node_ready.go:49] node "addons-269722" is "Ready"
	I1206 09:11:28.373131  388517 node_ready.go:38] duration metric: took 73.939285ms for node "addons-269722" to be "Ready" ...
	I1206 09:11:28.373146  388517 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:11:28.373191  388517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:11:28.564603  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 09:11:28.564627  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 09:11:28.805525  388517 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-269722" context rescaled to 1 replicas
	I1206 09:11:28.892887  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 09:11:28.892912  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 09:11:29.154236  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:11:29.154271  388517 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 09:11:29.383179  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:11:31.838578  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.740399341s)
	I1206 09:11:31.842964  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 09:11:31.846059  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:31.846625  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:31.846661  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:31.846877  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:32.206384  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 09:11:32.398884  388517 addons.go:239] Setting addon gcp-auth=true in "addons-269722"
	I1206 09:11:32.398959  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:32.401192  388517 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 09:11:32.404036  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:32.404508  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:32.404543  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:32.404739  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:33.380508  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.172285689s)
	I1206 09:11:33.380567  388517 addons.go:495] Verifying addon ingress=true in "addons-269722"
	I1206 09:11:33.380566  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.155226513s)
	I1206 09:11:33.380618  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.109753242s)
	I1206 09:11:33.382778  388517 out.go:179] * Verifying ingress addon...
	I1206 09:11:33.384997  388517 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 09:11:33.394151  388517 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:11:33.394167  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:33.983745  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:34.442405  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:34.961428  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:35.544843  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:35.959086  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:36.477596  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:36.933661  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:37.492983  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:37.907682  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:38.464342  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:38.476878  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.145459322s)
	I1206 09:11:38.476953  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.026949113s)
	I1206 09:11:38.477048  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.007974684s)
	I1206 09:11:38.477116  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.780538742s)
	I1206 09:11:38.477233  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.190220804s)
	I1206 09:11:38.477253  388517 addons.go:495] Verifying addon registry=true in "addons-269722"
	I1206 09:11:38.477312  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.77149962s)
	I1206 09:11:38.477336  388517 addons.go:495] Verifying addon metrics-server=true in "addons-269722"
	I1206 09:11:38.477363  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.627610125s)
	I1206 09:11:38.477525  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.594927288s)
	I1206 09:11:38.477544  388517 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.104332654s)
	I1206 09:11:38.477571  388517 api_server.go:72] duration metric: took 14.11116064s to wait for apiserver process to appear ...
	I1206 09:11:38.477583  388517 api_server.go:88] waiting for apiserver healthz status ...
	W1206 09:11:38.477581  388517 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:11:38.477604  388517 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1206 09:11:38.477604  388517 retry.go:31] will retry after 298.178363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
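The failure above is an ordering race rather than a broken manifest: the VolumeSnapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass are submitted in a single kubectl apply, and the CRD is not yet established when the custom resource arrives, hence "no matches for kind VolumeSnapshotClass" and the automatic retry ~300ms later. A manual equivalent of what the retry eventually achieves would be to wait for the CRD first (illustrative commands, using the same manifest path as the log):

    # Wait for the CRD to be established, then re-apply the snapshot class
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml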
	I1206 09:11:38.477795  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.094573264s)
	I1206 09:11:38.477823  388517 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-269722"
	I1206 09:11:38.477842  388517 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.076624226s)
	I1206 09:11:38.478884  388517 out.go:179] * Verifying registry addon...
	I1206 09:11:38.478890  388517 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-269722 service yakd-dashboard -n yakd-dashboard
	
	I1206 09:11:38.479684  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:38.479686  388517 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 09:11:38.481128  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 09:11:38.482570  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 09:11:38.482875  388517 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 09:11:38.483935  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 09:11:38.483956  388517 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 09:11:38.542927  388517 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1206 09:11:38.560082  388517 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:11:38.560109  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:38.560250  388517 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:11:38.560266  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:38.564812  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 09:11:38.564836  388517 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 09:11:38.577730  388517 api_server.go:141] control plane version: v1.34.2
	I1206 09:11:38.577765  388517 api_server.go:131] duration metric: took 100.173477ms to wait for apiserver health ...
	I1206 09:11:38.577777  388517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:11:38.641466  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:11:38.641493  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 09:11:38.668346  388517 system_pods.go:59] 20 kube-system pods found
	I1206 09:11:38.668390  388517 system_pods.go:61] "amd-gpu-device-plugin-4x5bp" [200b561d-9b38-41b5-b7ed-1d1b8aa9c977] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:11:38.668407  388517 system_pods.go:61] "coredns-66bc5c9577-l7sr8" [863c5ad0-c918-455d-8af1-40c9e1948ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:38.668417  388517 system_pods.go:61] "coredns-66bc5c9577-tn6dd" [1471497e-5fa4-48d4-a3c2-4d89904ed640] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1206 09:11:38.668435  388517 system_pods.go:61] "csi-hostpath-attacher-0" [bd1f1e77-8cad-40a2-97e3-2b05daf622f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:11:38.668450  388517 system_pods.go:61] "csi-hostpath-resizer-0" [4ed9076c-603a-48cd-a0d1-189d5fd51651] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:11:38.668460  388517 system_pods.go:61] "csi-hostpathplugin-c5bss" [d0b3695c-3b42-4065-9bdf-1b2206023c5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:11:38.668469  388517 system_pods.go:61] "etcd-addons-269722" [751c8eff-2c50-4b41-9193-90db8a0636bf] Running
	I1206 09:11:38.668476  388517 system_pods.go:61] "kube-apiserver-addons-269722" [d32278cf-92c2-455c-b174-fb8a83dadda4] Running
	I1206 09:11:38.668484  388517 system_pods.go:61] "kube-controller-manager-addons-269722" [7e253ad0-19bb-4870-926b-a1569f6f1398] Running
	I1206 09:11:38.668493  388517 system_pods.go:61] "kube-ingress-dns-minikube" [be7d521f-b31b-4231-bd74-8a66d93c3fc4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:11:38.668501  388517 system_pods.go:61] "kube-proxy-c2km9" [fb4b1fd3-c1e4-4d05-b0c9-5b52f82e1849] Running
	I1206 09:11:38.668508  388517 system_pods.go:61] "kube-scheduler-addons-269722" [73132ab3-f6c2-40cb-b3ba-aee3ff21019d] Running
	I1206 09:11:38.668520  388517 system_pods.go:61] "metrics-server-85b7d694d7-h2jq2" [c453240d-89be-44da-9070-e49d7ebbc593] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:11:38.668526  388517 system_pods.go:61] "nvidia-device-plugin-daemonset-knqvl" [916799e0-a31e-4b9a-9acc-b02b72d66299] Running
	I1206 09:11:38.668535  388517 system_pods.go:61] "registry-6b586f9694-rbbt6" [ec4e4a7f-6fd3-435d-bd23-ab587ffa45ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:11:38.668543  388517 system_pods.go:61] "registry-creds-764b6fb674-hkrh8" [b7741462-59ef-4947-ac5d-b5ffab88a570] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:11:38.668558  388517 system_pods.go:61] "registry-proxy-hbw67" [d47f2901-94d3-4e16-a0a8-5155e3f36879] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:11:38.668574  388517 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qbp6w" [0ead8e94-20c0-4dec-801d-66bd3dc39a02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.668644  388517 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v9sd5" [84d9cd78-04cb-4f8d-b8e7-a694b55e490a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.668650  388517 system_pods.go:61] "storage-provisioner" [07857490-6084-4734-a54d-f7de8ca29ea5] Running
	I1206 09:11:38.668660  388517 system_pods.go:74] duration metric: took 90.874732ms to wait for pod list to return data ...
	I1206 09:11:38.668672  388517 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:11:38.705679  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:11:38.776568  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:38.781850  388517 default_sa.go:45] found service account: "default"
	I1206 09:11:38.781885  388517 default_sa.go:55] duration metric: took 113.206818ms for default service account to be created ...
	I1206 09:11:38.781896  388517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:11:38.893236  388517 system_pods.go:86] 20 kube-system pods found
	I1206 09:11:38.893269  388517 system_pods.go:89] "amd-gpu-device-plugin-4x5bp" [200b561d-9b38-41b5-b7ed-1d1b8aa9c977] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:11:38.893310  388517 system_pods.go:89] "coredns-66bc5c9577-l7sr8" [863c5ad0-c918-455d-8af1-40c9e1948ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:38.893318  388517 system_pods.go:89] "coredns-66bc5c9577-tn6dd" [1471497e-5fa4-48d4-a3c2-4d89904ed640] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1206 09:11:38.893328  388517 system_pods.go:89] "csi-hostpath-attacher-0" [bd1f1e77-8cad-40a2-97e3-2b05daf622f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:11:38.893334  388517 system_pods.go:89] "csi-hostpath-resizer-0" [4ed9076c-603a-48cd-a0d1-189d5fd51651] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:11:38.893340  388517 system_pods.go:89] "csi-hostpathplugin-c5bss" [d0b3695c-3b42-4065-9bdf-1b2206023c5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:11:38.893344  388517 system_pods.go:89] "etcd-addons-269722" [751c8eff-2c50-4b41-9193-90db8a0636bf] Running
	I1206 09:11:38.893348  388517 system_pods.go:89] "kube-apiserver-addons-269722" [d32278cf-92c2-455c-b174-fb8a83dadda4] Running
	I1206 09:11:38.893352  388517 system_pods.go:89] "kube-controller-manager-addons-269722" [7e253ad0-19bb-4870-926b-a1569f6f1398] Running
	I1206 09:11:38.893357  388517 system_pods.go:89] "kube-ingress-dns-minikube" [be7d521f-b31b-4231-bd74-8a66d93c3fc4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:11:38.893361  388517 system_pods.go:89] "kube-proxy-c2km9" [fb4b1fd3-c1e4-4d05-b0c9-5b52f82e1849] Running
	I1206 09:11:38.893364  388517 system_pods.go:89] "kube-scheduler-addons-269722" [73132ab3-f6c2-40cb-b3ba-aee3ff21019d] Running
	I1206 09:11:38.893369  388517 system_pods.go:89] "metrics-server-85b7d694d7-h2jq2" [c453240d-89be-44da-9070-e49d7ebbc593] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:11:38.893374  388517 system_pods.go:89] "nvidia-device-plugin-daemonset-knqvl" [916799e0-a31e-4b9a-9acc-b02b72d66299] Running
	I1206 09:11:38.893379  388517 system_pods.go:89] "registry-6b586f9694-rbbt6" [ec4e4a7f-6fd3-435d-bd23-ab587ffa45ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:11:38.893383  388517 system_pods.go:89] "registry-creds-764b6fb674-hkrh8" [b7741462-59ef-4947-ac5d-b5ffab88a570] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:11:38.893389  388517 system_pods.go:89] "registry-proxy-hbw67" [d47f2901-94d3-4e16-a0a8-5155e3f36879] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:11:38.893395  388517 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qbp6w" [0ead8e94-20c0-4dec-801d-66bd3dc39a02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.893400  388517 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v9sd5" [84d9cd78-04cb-4f8d-b8e7-a694b55e490a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.893403  388517 system_pods.go:89] "storage-provisioner" [07857490-6084-4734-a54d-f7de8ca29ea5] Running
	I1206 09:11:38.893410  388517 system_pods.go:126] duration metric: took 111.509411ms to wait for k8s-apps to be running ...
	I1206 09:11:38.893420  388517 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:11:38.893463  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:39.039991  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:39.105053  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:39.105115  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:39.435086  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:39.577305  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:39.578361  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:39.891557  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.023055  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.023335  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:40.299367  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.593645009s)
	I1206 09:11:40.300442  388517 addons.go:495] Verifying addon gcp-auth=true in "addons-269722"
	I1206 09:11:40.302591  388517 out.go:179] * Verifying gcp-auth addon...
	I1206 09:11:40.304667  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 09:11:40.334052  388517 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 09:11:40.334086  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:40.389629  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.490307  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:40.490431  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.813628  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:40.836756  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.060127251s)
	I1206 09:11:40.836796  388517 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.943309249s)
	I1206 09:11:40.836822  388517 system_svc.go:56] duration metric: took 1.943395217s WaitForService to wait for kubelet
	I1206 09:11:40.836835  388517 kubeadm.go:587] duration metric: took 16.470422509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:11:40.836870  388517 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:11:40.843939  388517 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 09:11:40.843963  388517 node_conditions.go:123] node cpu capacity is 2
	I1206 09:11:40.843980  388517 node_conditions.go:105] duration metric: took 7.101649ms to run NodePressure ...
	I1206 09:11:40.844002  388517 start.go:242] waiting for startup goroutines ...
	I1206 09:11:40.890430  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.986853  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.992475  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:41.355777  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:41.389062  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:41.487963  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:41.489146  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:41.808891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:41.889779  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:41.985833  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:41.987429  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:42.308166  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:42.409444  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:42.510304  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:42.511035  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:42.809432  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:42.888458  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:42.984315  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:42.987586  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:43.308446  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:43.388943  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:43.496391  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:43.496607  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:43.808230  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:43.888549  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:43.984398  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:43.986840  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:44.312899  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:44.514152  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:44.514383  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:44.515204  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:44.811435  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:44.888384  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:44.984563  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:44.986735  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:45.307401  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:45.388721  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:45.486271  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:45.488952  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:45.808083  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:45.888466  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:45.985838  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:45.987005  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:46.309162  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:46.390486  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:46.484411  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:46.486023  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:46.809473  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:46.888547  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:46.984691  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:46.987824  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:47.308194  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:47.388621  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:47.488407  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:47.488489  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:47.808350  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:47.888379  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:47.984429  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:47.986654  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:48.308303  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:48.391026  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:48.664162  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:48.666762  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:48.808417  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:48.888241  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:48.983979  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:48.986690  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:49.308241  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:49.388925  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:49.484568  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:49.486742  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:49.809515  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:49.889646  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:49.987428  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:49.988527  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:50.366787  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:50.389057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:50.486489  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:50.487907  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:50.810176  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:50.910430  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:50.984648  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:50.992028  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.319081  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:51.388999  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:51.489012  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:51.492499  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.808942  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:51.896270  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:51.990446  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.992371  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:52.309057  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:52.389352  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:52.484414  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:52.486682  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:52.809190  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:52.888338  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:52.991907  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:52.992417  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.307785  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:53.390249  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:53.484717  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.486614  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:53.810677  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:53.889084  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:53.987650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.990484  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:54.315414  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:54.395125  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:54.494235  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:54.494236  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:54.824289  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:54.888711  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:54.984659  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:54.987146  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:55.308481  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:55.390618  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:55.484329  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:55.485893  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:55.809298  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:55.895192  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:55.989404  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:55.993237  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:56.311289  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:56.389393  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:56.487349  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:56.487525  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:56.808606  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:56.889213  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:56.985510  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:56.991535  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:57.308723  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:57.388636  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:57.488790  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:57.490213  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:57.809073  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:57.887830  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:57.984304  388517 kapi.go:107] duration metric: took 19.503171238s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 09:11:57.987671  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:58.309052  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:58.389257  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:58.490899  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:58.809457  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:58.890577  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.025290  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:59.309296  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:59.392111  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.492783  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:59.807475  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:59.892512  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.986432  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:00.357752  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:00.391649  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:00.485367  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:00.809392  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:00.887883  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:00.986127  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:01.312877  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:01.413507  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:01.486873  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:01.809042  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:01.889057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:01.986042  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:02.311892  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:02.390027  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:02.491375  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:02.923841  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:02.927183  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:02.986095  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:03.309017  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:03.390050  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:03.486194  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:03.812456  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:03.892317  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:03.986695  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:04.308544  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:04.389102  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:04.486496  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:04.810301  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:04.888379  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:04.986924  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:05.308837  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:05.390825  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:05.485772  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:05.807540  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:05.888733  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:05.985799  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:06.310889  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:06.389329  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:06.492425  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:06.808561  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:06.888635  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:06.985484  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:07.309758  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:07.390275  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:07.486771  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:07.807681  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:07.888485  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:07.987584  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:08.309272  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:08.388617  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:08.487646  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:08.809312  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:08.888519  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:08.988459  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:09.309597  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:09.411374  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:09.487378  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:09.812712  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:09.912033  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.012090  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:10.308609  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:10.389736  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.488553  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:10.808609  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:10.893781  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.986159  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:11.669172  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:11.670324  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:11.671190  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:11.811594  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:11.892535  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:11.985928  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:12.310097  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:12.390596  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:12.489116  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:12.809321  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:12.890619  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:12.987653  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:13.309120  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:13.388316  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:13.488650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:13.808316  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:13.889333  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:13.986213  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:14.308276  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:14.388283  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:14.487207  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:14.808143  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:14.888955  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:14.986279  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:15.309037  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:15.388329  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:15.488214  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:15.810501  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:15.896511  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:15.986845  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:16.307928  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:16.390728  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:16.485976  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:16.816944  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:16.970568  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:16.988372  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:17.312911  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:17.390564  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:17.486836  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:17.811792  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:17.891576  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:17.988049  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:18.309919  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:18.388844  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:18.486086  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:18.809596  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:18.890914  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:18.986230  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:19.310480  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:19.410702  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:19.486633  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:19.807918  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:19.888811  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:19.987072  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:20.309606  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:20.412057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:20.512925  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:20.817199  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:20.949254  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:20.990626  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:21.312159  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:21.389204  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:21.488639  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:21.810891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:21.888759  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:21.988415  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:22.309245  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:22.391268  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:22.486340  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:22.808382  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:22.889770  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:22.988997  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:23.309823  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:23.388910  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:23.489579  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:23.810562  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:23.889125  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:23.986750  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:24.308898  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:24.389306  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:24.486339  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:24.809381  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:24.888322  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:24.987056  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:25.309252  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:25.388372  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:25.486924  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:25.810099  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:25.891569  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:25.993945  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:26.314253  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:26.503975  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:26.504104  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:26.811809  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:26.889063  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:26.990570  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:27.308661  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:27.388783  388517 kapi.go:107] duration metric: took 54.003785227s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 09:12:27.539433  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:27.808824  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:27.987339  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:28.311281  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:28.487383  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:28.810397  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:28.990303  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:29.309345  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:29.488470  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:29.811844  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:29.987408  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:30.311108  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:30.487049  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:30.807650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:30.986406  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:31.309915  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:31.486400  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:31.814032  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:31.989103  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:32.311817  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:32.486527  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:32.808601  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:32.989352  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:33.309084  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:33.486427  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:33.809272  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:33.986717  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:34.308891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:34.486989  388517 kapi.go:107] duration metric: took 56.004420234s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 09:12:34.808808  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:35.310012  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:35.808588  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:36.309169  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:36.808993  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:37.310066  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:37.808459  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:38.308629  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:38.811741  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:39.309361  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:39.809037  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:40.308704  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:40.808398  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:41.307791  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:41.808294  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:42.308956  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:42.809502  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:43.307669  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:43.810175  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:44.309568  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:44.809320  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:45.309320  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:45.807962  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:46.311821  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:46.808138  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:47.308750  388517 kapi.go:107] duration metric: took 1m7.004080739s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 09:12:47.309965  388517 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-269722 cluster.
	I1206 09:12:47.310907  388517 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 09:12:47.312086  388517 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 09:12:47.313288  388517 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, storage-provisioner, inspektor-gadget, registry-creds, volcano, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1206 09:12:47.314294  388517 addons.go:530] duration metric: took 1m22.947828238s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass storage-provisioner inspektor-gadget registry-creds volcano cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1206 09:12:47.314341  388517 start.go:247] waiting for cluster config update ...
	I1206 09:12:47.314373  388517 start.go:256] writing updated cluster config ...
	I1206 09:12:47.314678  388517 ssh_runner.go:195] Run: rm -f paused
	I1206 09:12:47.321984  388517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:12:47.325938  388517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l7sr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.331363  388517 pod_ready.go:94] pod "coredns-66bc5c9577-l7sr8" is "Ready"
	I1206 09:12:47.331382  388517 pod_ready.go:86] duration metric: took 5.423953ms for pod "coredns-66bc5c9577-l7sr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.333935  388517 pod_ready.go:83] waiting for pod "etcd-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.339670  388517 pod_ready.go:94] pod "etcd-addons-269722" is "Ready"
	I1206 09:12:47.339686  388517 pod_ready.go:86] duration metric: took 5.735911ms for pod "etcd-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.341852  388517 pod_ready.go:83] waiting for pod "kube-apiserver-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.348825  388517 pod_ready.go:94] pod "kube-apiserver-addons-269722" is "Ready"
	I1206 09:12:47.348841  388517 pod_ready.go:86] duration metric: took 6.965989ms for pod "kube-apiserver-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.351661  388517 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.728666  388517 pod_ready.go:94] pod "kube-controller-manager-addons-269722" is "Ready"
	I1206 09:12:47.728694  388517 pod_ready.go:86] duration metric: took 377.017246ms for pod "kube-controller-manager-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.928250  388517 pod_ready.go:83] waiting for pod "kube-proxy-c2km9" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.326318  388517 pod_ready.go:94] pod "kube-proxy-c2km9" is "Ready"
	I1206 09:12:48.326347  388517 pod_ready.go:86] duration metric: took 398.070754ms for pod "kube-proxy-c2km9" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.527945  388517 pod_ready.go:83] waiting for pod "kube-scheduler-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.925436  388517 pod_ready.go:94] pod "kube-scheduler-addons-269722" is "Ready"
	I1206 09:12:48.925477  388517 pod_ready.go:86] duration metric: took 397.504009ms for pod "kube-scheduler-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.925497  388517 pod_ready.go:40] duration metric: took 1.603486959s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:12:48.968795  388517 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:12:48.970523  388517 out.go:179] * Done! kubectl is now configured to use "addons-269722" cluster and "default" namespace by default
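	The kapi.go:96 / kapi.go:107 lines above record minikube's addon wait loop: each tick lists the pods matching a label selector, logs the observed phase while they are still Pending, and kapi.go:107 reports the total wait once they come up. A minimal, hypothetical Go sketch of that pattern using client-go is below; it is not minikube's actual kapi.go, and the WaitForPods name and 500ms poll interval are illustrative assumptions.

	package waitpods

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitForPods polls until every pod matching selector in ns is Running,
	// logging the observed phase on each tick, and reports the total duration.
	func WaitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
			phase := corev1.PodPending
			if err == nil && len(pods.Items) > 0 {
				phase = pods.Items[0].Status.Phase
			}
			fmt.Printf("waiting for pod %q, current state: %s: [%v]\n", selector, phase, err)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods %q not Running within %s", selector, timeout)
	}

	func allRunning(items []corev1.Pod) bool {
		for _, p := range items {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}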
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	cd72f94e2772e       bb3a3eb26ca3c       47 seconds ago      Running             volcano-controllers                      0                   a56a4ec176b2d       volcano-controllers-6fd4f85cb8-6xqwk       volcano-system
	2454f875fd20c       7a12f2aed60be       6 minutes ago       Running             gcp-auth                                 0                   062c3aba630b2       gcp-auth-78565c9fb4-ncq5v                  gcp-auth
	5eac265ea5e4c       dcc78144955fa       6 minutes ago       Running             admission                                0                   618c92cad1fbf       volcano-admission-6c447bd768-72f4n         volcano-system
	29c2d038bf437       738351fd438f0       6 minutes ago       Running             csi-snapshotter                          0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	5d8ecc80d5382       931dbfd16f87c       6 minutes ago       Running             csi-provisioner                          0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	fd0e1a7571386       e899260153aed       6 minutes ago       Running             liveness-probe                           0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	24d11a8b11e79       e255e073c508c       6 minutes ago       Running             hostpath                                 0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	5ca832afab7b5       88ef14a257f42       6 minutes ago       Running             node-driver-registrar                    0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	c4cccebac4fc4       97fe896f8c07b       6 minutes ago       Running             controller                               0                   9ee054c3901ad       ingress-nginx-controller-6c8bf45fb-ndk8c   ingress-nginx
	2630d4a83ae5f       19a639eda60f0       6 minutes ago       Running             csi-resizer                              0                   a312cf43898ad       csi-hostpath-resizer-0                     kube-system
	5bd7e91038ad6       a1ed5895ba635       6 minutes ago       Running             csi-external-health-monitor-controller   0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	1ff38ec18e78f       59cbb42146a37       6 minutes ago       Running             csi-attacher                             0                   73074a1a93680       csi-hostpath-attacher-0                    kube-system
	d8b53dbf90582       dcc78144955fa       6 minutes ago       Exited              main                                     0                   60f04fc61981f       volcano-admission-init-kx7hz               volcano-system
	278c91c11ce27       aa61ee9c70bc4       6 minutes ago       Running             volume-snapshot-controller               0                   4bcff1b74bfec       snapshot-controller-7d9fbc56b8-qbp6w       kube-system
	31ec84f4556b1       aa61ee9c70bc4       6 minutes ago       Running             volume-snapshot-controller               0                   49c8968cc1ce1       snapshot-controller-7d9fbc56b8-v9sd5       kube-system
	864a2ecb4396f       884bd0ac01c8f       6 minutes ago       Exited              patch                                    0                   3ddf53bb8795f       ingress-nginx-admission-patch-xpn6k        ingress-nginx
	2850465598faa       e16d1e3a10667       6 minutes ago       Running             local-path-provisioner                   0                   3e10d3adfa610       local-path-provisioner-648f6765c9-g86zf    local-path-storage
	622137fca4f51       e6a089fe3492b       6 minutes ago       Running             gadget                                   0                   45bd70456d7f9       gadget-clpmt                               gadget
	c2e7e0b7588b1       884bd0ac01c8f       6 minutes ago       Exited              create                                   0                   1ca23ac12776f       ingress-nginx-admission-create-kl75g       ingress-nginx
	ad42e03079189       c7e3a3eeaf5ed       6 minutes ago       Running             yakd                                     0                   ffded908b217e       yakd-dashboard-5ff678cb9-8bmdx             yakd-dashboard
	a4cadba986c3f       b1c9f9ef5f0c2       6 minutes ago       Running             registry-proxy                           0                   b2322abfb0189       registry-proxy-hbw67                       kube-system
	bed650da7c33c       e4e5706768198       6 minutes ago       Running             registry                                 0                   e7b29f101dd31       registry-6b586f9694-rbbt6                  kube-system
	e6704df8ec1f5       b9e1e3849e070       6 minutes ago       Running             metrics-server                           0                   e3d6028040554       metrics-server-85b7d694d7-h2jq2            kube-system
	2774623c95b6c       b6ab53fbfedaa       6 minutes ago       Running             minikube-ingress-dns                     0                   a84f9f0b8a344       kube-ingress-dns-minikube                  kube-system
	ea52f4611a442       83572eb9c0645       7 minutes ago       Running             cloud-spanner-emulator                   0                   5df46abdcb1bc       cloud-spanner-emulator-5bdddb765-7m79k     default
	d9e6d13d8e418       d5e667c0f2bb6       7 minutes ago       Running             amd-gpu-device-plugin                    0                   479fca73c33e3       amd-gpu-device-plugin-4x5bp                kube-system
	122bfb2d0c439       cb251c438ab2d       7 minutes ago       Running             nvidia-device-plugin-ctr                 0                   562e20139e870       nvidia-device-plugin-daemonset-knqvl       kube-system
	a9394a7445ed6       6e38f40d628db       7 minutes ago       Running             storage-provisioner                      0                   89b1f84c8945f       storage-provisioner                        kube-system
	e636e6172c8c9       52546a367cc9e       7 minutes ago       Running             coredns                                  0                   18cf9f60905af       coredns-66bc5c9577-l7sr8                   kube-system
	d9ab1c94b0adc       8aa150647e88a       7 minutes ago       Running             kube-proxy                               0                   7ce46fc8fe779       kube-proxy-c2km9                           kube-system
	f7319b640fed7       a3e246e9556e9       7 minutes ago       Running             etcd                                     0                   5d2b5e40c2235       etcd-addons-269722                         kube-system
	31363d509c1e7       88320b5498ff2       7 minutes ago       Running             kube-scheduler                           0                   f53f47f2f0dc9       kube-scheduler-addons-269722               kube-system
	c301895eb03e7       01e8bacf0f500       7 minutes ago       Running             kube-controller-manager                  0                   afc5069ef7820       kube-controller-manager-addons-269722      kube-system
	95341ea890f7a       a5f569d49a979       7 minutes ago       Running             kube-apiserver                           0                   fb1d3f9401a55       kube-apiserver-addons-269722               kube-system
	
	
	==> containerd <==
	Dec 06 09:15:18 addons-269722 containerd[831]: time="2025-12-06T09:15:18.191155390Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:15:18 addons-269722 containerd[831]: time="2025-12-06T09:15:18.844124022Z" level=error msg="PullImage \"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\" failed" error="failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:15:18 addons-269722 containerd[831]: time="2025-12-06T09:15:18.844232830Z" level=info msg="stop pulling image docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: active requests=0, bytes read=11015"
	Dec 06 09:15:19 addons-269722 containerd[831]: time="2025-12-06T09:15:19.924124846Z" level=info msg="PullImage \"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\""
	Dec 06 09:15:19 addons-269722 containerd[831]: time="2025-12-06T09:15:19.928629023Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:15:20 addons-269722 containerd[831]: time="2025-12-06T09:15:20.175326076Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:15:20 addons-269722 containerd[831]: time="2025-12-06T09:15:20.831801529Z" level=error msg="PullImage \"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\" failed" error="failed to pull and unpack image \"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:15:20 addons-269722 containerd[831]: time="2025-12-06T09:15:20.831934811Z" level=info msg="stop pulling image docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: active requests=0, bytes read=11063"
	Dec 06 09:18:00 addons-269722 containerd[831]: time="2025-12-06T09:18:00.922484662Z" level=info msg="PullImage \"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\""
	Dec 06 09:18:00 addons-269722 containerd[831]: time="2025-12-06T09:18:00.925831227Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:18:01 addons-269722 containerd[831]: time="2025-12-06T09:18:01.185121994Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:18:03 addons-269722 containerd[831]: time="2025-12-06T09:18:03.140084863Z" level=info msg="ImageCreate event name:\"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:18:03 addons-269722 containerd[831]: time="2025-12-06T09:18:03.141690960Z" level=info msg="stop pulling image docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: active requests=0, bytes read=37239713"
	Dec 06 09:18:03 addons-269722 containerd[831]: time="2025-12-06T09:18:03.143594361Z" level=info msg="ImageCreate event name:\"sha256:bb3a3eb26ca3ce4ff464c11efd22f3ee4ae563a39b7bc1622ad1abf547280f3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:18:03 addons-269722 containerd[831]: time="2025-12-06T09:18:03.144941103Z" level=info msg="Pulled image \"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\" with image id \"sha256:bb3a3eb26ca3ce4ff464c11efd22f3ee4ae563a39b7bc1622ad1abf547280f3a\", repo tag \"\", repo digest \"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\", size \"41028492\" in 2.222369666s"
	Dec 06 09:18:03 addons-269722 containerd[831]: time="2025-12-06T09:18:03.144986807Z" level=info msg="PullImage \"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\" returns image reference \"sha256:bb3a3eb26ca3ce4ff464c11efd22f3ee4ae563a39b7bc1622ad1abf547280f3a\""
	Dec 06 09:18:03 addons-269722 containerd[831]: time="2025-12-06T09:18:03.152559700Z" level=info msg="CreateContainer within sandbox \"a56a4ec176b2d3b9dca2f7d21a7521d6eb052d1c606b24a85ed19a51b4a7f05b\" for container &ContainerMetadata{Name:volcano-controllers,Attempt:0,}"
	Dec 06 09:18:03 addons-269722 containerd[831]: time="2025-12-06T09:18:03.170564163Z" level=info msg="CreateContainer within sandbox \"a56a4ec176b2d3b9dca2f7d21a7521d6eb052d1c606b24a85ed19a51b4a7f05b\" for &ContainerMetadata{Name:volcano-controllers,Attempt:0,} returns container id \"cd72f94e2772e9f449038fbb11a76fe948397a4d883805f7113e30d4b04a9c86\""
	Dec 06 09:18:03 addons-269722 containerd[831]: time="2025-12-06T09:18:03.171031953Z" level=info msg="StartContainer for \"cd72f94e2772e9f449038fbb11a76fe948397a4d883805f7113e30d4b04a9c86\""
	Dec 06 09:18:03 addons-269722 containerd[831]: time="2025-12-06T09:18:03.274340266Z" level=info msg="StartContainer for \"cd72f94e2772e9f449038fbb11a76fe948397a4d883805f7113e30d4b04a9c86\" returns successfully"
	Dec 06 09:18:12 addons-269722 containerd[831]: time="2025-12-06T09:18:12.922770085Z" level=info msg="PullImage \"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\""
	Dec 06 09:18:12 addons-269722 containerd[831]: time="2025-12-06T09:18:12.926723239Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:18:13 addons-269722 containerd[831]: time="2025-12-06T09:18:13.228132751Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:18:13 addons-269722 containerd[831]: time="2025-12-06T09:18:13.872611172Z" level=error msg="PullImage \"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\" failed" error="failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:13 addons-269722 containerd[831]: time="2025-12-06T09:18:13.872702938Z" level=info msg="stop pulling image docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: active requests=0, bytes read=11015"
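	The pull failures above (429 Too Many Requests on the volcanosh/vc-scheduler and vc-controller-manager digests) are Docker Hub's anonymous pull rate limit, and they account for the ImagePullBackOff that keeps volcano-scheduler-76c996c8bf-hld5k Pending until the test's 6m0s deadline; the controller-manager pull eventually succeeds at 09:18:03, while the scheduler pull is still rate-limited at 09:18:13. For context, a stand-alone Go sketch (not part of the test harness; the flow is an assumption about Docker Hub's token-then-manifest API) that issues the same anonymous manifest request for the digest pinned in this run and prints the status plus the registry's RateLimit headers:

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// 1. Anonymous pull token for the repository; this is where Docker Hub
		//    accounts the unauthenticated pull rate limit.
		const tokenURL = "https://auth.docker.io/token?service=registry.docker.io&scope=repository:volcanosh/vc-scheduler:pull"
		resp, err := http.Get(tokenURL)
		if err != nil {
			panic(err)
		}
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}
		resp.Body.Close()

		// 2. HEAD the pinned manifest digest: 200 means a pull would proceed,
		//    429 is the "toomanyrequests" condition containerd logs above.
		const manifestURL = "https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
		req, err := http.NewRequest(http.MethodHead, manifestURL, nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")
		res, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer res.Body.Close()
		fmt.Println("manifest response:", res.Status,
			"ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
	}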
	
	
	==> coredns [e636e6172c8c93ebe7783047ae4449227f6f37f80a082ff4fd383ebc5d08fdbe] <==
	[INFO] 127.0.0.1:45630 - 42972 "HINFO IN 8857311011245630481.6411694208669158279. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079041457s
	[INFO] 10.244.0.8:51474 - 1148 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002226812s
	[INFO] 10.244.0.8:51474 - 59613 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000209051s
	[INFO] 10.244.0.8:51474 - 58064 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00013173s
	[INFO] 10.244.0.8:51474 - 29072 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084614s
	[INFO] 10.244.0.8:51474 - 28407 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000124845s
	[INFO] 10.244.0.8:51474 - 5185 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106747s
	[INFO] 10.244.0.8:51474 - 28903 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000097914s
	[INFO] 10.244.0.8:51474 - 44135 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000086701s
	[INFO] 10.244.0.8:42198 - 56025 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000124465s
	[INFO] 10.244.0.8:42198 - 58448 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118323s
	[INFO] 10.244.0.8:40240 - 52465 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104193s
	[INFO] 10.244.0.8:40240 - 52746 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113004s
	[INFO] 10.244.0.8:49362 - 65347 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126485s
	[INFO] 10.244.0.8:49362 - 110 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000216341s
	[INFO] 10.244.0.8:51040 - 59068 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087119s
	[INFO] 10.244.0.8:51040 - 59346 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118565s
	[INFO] 10.244.0.27:48228 - 49165 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000319642s
	[INFO] 10.244.0.27:40396 - 12915 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001198011s
	[INFO] 10.244.0.27:39038 - 53409 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158695s
	[INFO] 10.244.0.27:59026 - 7807 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134321s
	[INFO] 10.244.0.27:32836 - 36351 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085705s
	[INFO] 10.244.0.27:33578 - 24448 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114082s
	[INFO] 10.244.0.27:49566 - 16674 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003361826s
	[INFO] 10.244.0.27:37372 - 21961 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004334216s
	
	
	==> describe nodes <==
	Name:               addons-269722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-269722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-269722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_11_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-269722
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-269722"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-269722
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:18:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:18:28 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:18:28 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:18:28 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:18:28 +0000   Sat, 06 Dec 2025 09:11:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    addons-269722
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 faaa974faf9d46f8a3b502afcdf78e43
	  System UUID:                faaa974f-af9d-46f8-a3b5-02afcdf78e43
	  Boot ID:                    33004088-aa48-42d5-ac29-91fbfe5a6c68
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5bdddb765-7m79k      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  gadget                      gadget-clpmt                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  gcp-auth                    gcp-auth-78565c9fb4-ncq5v                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-ndk8c    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         7m17s
	  kube-system                 amd-gpu-device-plugin-4x5bp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 coredns-66bc5c9577-l7sr8                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m25s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 csi-hostpathplugin-c5bss                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 etcd-addons-269722                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m30s
	  kube-system                 kube-apiserver-addons-269722                250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-controller-manager-addons-269722       200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-proxy-c2km9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-scheduler-addons-269722                100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 metrics-server-85b7d694d7-h2jq2             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m20s
	  kube-system                 nvidia-device-plugin-daemonset-knqvl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 registry-6b586f9694-rbbt6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 registry-creds-764b6fb674-hkrh8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 registry-proxy-hbw67                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 snapshot-controller-7d9fbc56b8-qbp6w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-v9sd5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  local-path-storage          local-path-provisioner-648f6765c9-g86zf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  volcano-system              volcano-admission-6c447bd768-72f4n          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  volcano-system              volcano-controllers-6fd4f85cb8-6xqwk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  volcano-system              volcano-scheduler-76c996c8bf-hld5k          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8bmdx              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m22s                  kube-proxy       
	  Normal  Starting                 7m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m37s (x8 over 7m37s)  kubelet          Node addons-269722 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s (x8 over 7m37s)  kubelet          Node addons-269722 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m37s (x7 over 7m37s)  kubelet          Node addons-269722 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m30s                  kubelet          Node addons-269722 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m30s                  kubelet          Node addons-269722 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m30s                  kubelet          Node addons-269722 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m30s                  kubelet          Node addons-269722 status is now: NodeReady
	  Normal  RegisteredNode           7m26s                  node-controller  Node addons-269722 event: Registered Node addons-269722 in Controller
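	For reference, the percentages in the Allocated resources table above are computed against the node's Allocatable values (2 CPU, 4001788Ki memory) and truncated to whole percent; a quick check, assuming that rounding behavior:

	\[
	\frac{950\,\text{m}}{2000\,\text{m}} = 47.5\% \;(\text{shown as }47\%),\qquad
	\frac{588 \cdot 1024\ \text{Ki}}{4001788\ \text{Ki}} \approx 15.0\%,\qquad
	\frac{426 \cdot 1024\ \text{Ki}}{4001788\ \text{Ki}} \approx 10.9\% \;(\text{shown as }10\%).
	\]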
	
	
	==> dmesg <==
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008415] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.191948] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 6 09:11] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.103571] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.104285] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.120732] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000039] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.485726] kauditd_printk_skb: 309 callbacks suppressed
	[  +0.266835] kauditd_printk_skb: 464 callbacks suppressed
	[  +0.168903] kauditd_printk_skb: 353 callbacks suppressed
	[  +9.726100] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.920301] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 09:12] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.529426] kauditd_printk_skb: 85 callbacks suppressed
	[  +1.897097] kauditd_printk_skb: 166 callbacks suppressed
	[  +2.318976] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.568626] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.319087] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.000694] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 53 callbacks suppressed
	[Dec 6 09:18] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [f7319b640fed7119b3d158c30e3bc2dd128fc0442cd17b3131fd715d76a44c9a] <==
	{"level":"warn","ts":"2025-12-06T09:11:54.843376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60888","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:12:00.349081Z","caller":"traceutil/trace.go:172","msg":"trace[182832154] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"130.322544ms","start":"2025-12-06T09:12:00.218747Z","end":"2025-12-06T09:12:00.349069Z","steps":["trace[182832154] 'process raft request'  (duration: 130.225045ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:02.911329Z","caller":"traceutil/trace.go:172","msg":"trace[364654530] linearizableReadLoop","detail":"{readStateIndex:1170; appliedIndex:1170; }","duration":"112.39164ms","start":"2025-12-06T09:12:02.798919Z","end":"2025-12-06T09:12:02.911310Z","steps":["trace[364654530] 'read index received'  (duration: 112.383502ms)","trace[364654530] 'applied index is now lower than readState.Index'  (duration: 6.997µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:02.912205Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.029086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:02.912295Z","caller":"traceutil/trace.go:172","msg":"trace[2137904910] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1147; }","duration":"113.394912ms","start":"2025-12-06T09:12:02.798891Z","end":"2025-12-06T09:12:02.912286Z","steps":["trace[2137904910] 'agreement among raft nodes before linearized reading'  (duration: 112.675357ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:02.912373Z","caller":"traceutil/trace.go:172","msg":"trace[1452242551] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"297.5818ms","start":"2025-12-06T09:12:02.614786Z","end":"2025-12-06T09:12:02.912368Z","steps":["trace[1452242551] 'process raft request'  (duration: 296.713443ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:11.568524Z","caller":"traceutil/trace.go:172","msg":"trace[574553895] linearizableReadLoop","detail":"{readStateIndex:1209; appliedIndex:1209; }","duration":"261.730098ms","start":"2025-12-06T09:12:11.306778Z","end":"2025-12-06T09:12:11.568508Z","steps":["trace[574553895] 'read index received'  (duration: 261.726617ms)","trace[574553895] 'applied index is now lower than readState.Index'  (duration: 3.06µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.650658Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.826038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.650735Z","caller":"traceutil/trace.go:172","msg":"trace[1035894961] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1184; }","duration":"343.946028ms","start":"2025-12-06T09:12:11.306774Z","end":"2025-12-06T09:12:11.650720Z","steps":["trace[1035894961] 'agreement among raft nodes before linearized reading'  (duration: 261.814135ms)","trace[1035894961] 'range keys from in-memory index tree'  (duration: 81.970543ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.650785Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:12:11.306763Z","time spent":"344.009881ms","remote":"127.0.0.1:53040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:12:11.651140Z","caller":"traceutil/trace.go:172","msg":"trace[483765702] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"350.392753ms","start":"2025-12-06T09:12:11.300733Z","end":"2025-12-06T09:12:11.651125Z","steps":["trace[483765702] 'process raft request'  (duration: 267.904896ms)","trace[483765702] 'compare'  (duration: 81.445642ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.651205Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:12:11.300717Z","time spent":"350.449818ms","remote":"127.0.0.1:53164","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<>"}
	{"level":"warn","ts":"2025-12-06T09:12:11.651419Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.416676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651477Z","caller":"traceutil/trace.go:172","msg":"trace[167194031] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1185; }","duration":"172.473278ms","start":"2025-12-06T09:12:11.478992Z","end":"2025-12-06T09:12:11.651465Z","steps":["trace[167194031] 'agreement among raft nodes before linearized reading'  (duration: 172.38943ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:11.651605Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.385049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651660Z","caller":"traceutil/trace.go:172","msg":"trace[1143122093] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1185; }","duration":"270.440925ms","start":"2025-12-06T09:12:11.381211Z","end":"2025-12-06T09:12:11.651652Z","steps":["trace[1143122093] 'agreement among raft nodes before linearized reading'  (duration: 270.367937ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:11.651812Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"298.784519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651836Z","caller":"traceutil/trace.go:172","msg":"trace[535987253] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1185; }","duration":"298.810243ms","start":"2025-12-06T09:12:11.353018Z","end":"2025-12-06T09:12:11.651829Z","steps":["trace[535987253] 'agreement among raft nodes before linearized reading'  (duration: 298.76303ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:20.929795Z","caller":"traceutil/trace.go:172","msg":"trace[628627548] transaction","detail":"{read_only:false; response_revision:1260; number_of_response:1; }","duration":"105.667962ms","start":"2025-12-06T09:12:20.824110Z","end":"2025-12-06T09:12:20.929778Z","steps":["trace[628627548] 'process raft request'  (duration: 105.596429ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:23.778852Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.603155ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:23.779380Z","caller":"traceutil/trace.go:172","msg":"trace[424992269] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1281; }","duration":"218.131424ms","start":"2025-12-06T09:12:23.561231Z","end":"2025-12-06T09:12:23.779363Z","steps":["trace[424992269] 'range keys from in-memory index tree'  (duration: 217.594054ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:26.494846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.642654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:26.495325Z","caller":"traceutil/trace.go:172","msg":"trace[102060551] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1290; }","duration":"113.581468ms","start":"2025-12-06T09:12:26.381729Z","end":"2025-12-06T09:12:26.495310Z","steps":["trace[102060551] 'range keys from in-memory index tree'  (duration: 112.580581ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:13:20.713154Z","caller":"traceutil/trace.go:172","msg":"trace[1259088558] transaction","detail":"{read_only:false; response_revision:1478; number_of_response:1; }","duration":"103.020152ms","start":"2025-12-06T09:13:20.609588Z","end":"2025-12-06T09:13:20.712608Z","steps":["trace[1259088558] 'process raft request'  (duration: 102.875042ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:18:37.035751Z","caller":"traceutil/trace.go:172","msg":"trace[10856222] transaction","detail":"{read_only:false; response_revision:2013; number_of_response:1; }","duration":"171.241207ms","start":"2025-12-06T09:18:36.864442Z","end":"2025-12-06T09:18:37.035683Z","steps":["trace[10856222] 'process raft request'  (duration: 170.245197ms)"],"step_count":1}
	
	
	==> gcp-auth [2454f875fd20c52fe36c9b202027593ff7fe9e0eeeb2a7c7c46e52f46a87cdd5] <==
	2025/12/06 09:12:46 GCP Auth Webhook started!
	
	
	==> kernel <==
	 09:18:50 up 8 min,  0 users,  load average: 0.28, 0.94, 0.66
	Linux addons-269722 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [95341ea890f7aa882f4bc2a6906002451241d8c5faa071707f5de92b27e20ce7] <==
	W1206 09:11:54.801051       1 logging.go:55] [core] [Channel #314 SubChannel #315]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1206 09:11:54.824030       1 logging.go:55] [core] [Channel #318 SubChannel #319]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1206 09:11:55.696819       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 09:11:55.696896       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 09:11:55.700788       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.28.243:443: connect: connection refused" logger="UnhandledError"
	E1206 09:11:55.702607       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.28.243:443: connect: connection refused" logger="UnhandledError"
	E1206 09:11:55.706446       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.28.243:443: connect: connection refused" logger="UnhandledError"
	E1206 09:11:55.728420       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.28.243:443: connect: connection refused" logger="UnhandledError"
	E1206 09:11:55.769800       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.28.243:443: connect: connection refused" logger="UnhandledError"
	E1206 09:11:55.851437       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.28.243:443: connect: connection refused" logger="UnhandledError"
	E1206 09:11:56.013743       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.28.243:443: connect: connection refused" logger="UnhandledError"
	E1206 09:11:56.337177       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.28.243:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.28.243:443: connect: connection refused" logger="UnhandledError"
	W1206 09:11:56.697879       1 handler_proxy.go:99] no RequestInfo found in the context
	W1206 09:11:56.697966       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 09:11:56.697919       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1206 09:11:56.698057       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1206 09:11:56.698065       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 09:11:56.699496       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1206 09:11:57.051192       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [c301895eb03e76a7f98c21fd67491f3e3114e008ac0bc660fb3871dde69fdff8] <==
	I1206 09:11:24.052177       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 09:11:24.052225       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:11:24.052782       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:11:24.052875       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:11:24.053063       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 09:11:24.053407       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:11:24.053419       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:11:24.053033       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:11:24.057302       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:11:24.057377       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:11:24.058839       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	E1206 09:11:30.630523       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1206 09:11:54.027365       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 09:11:54.029817       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1206 09:11:54.029945       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I1206 09:11:54.030562       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I1206 09:11:54.031228       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch.volcano.sh"
	I1206 09:11:54.031356       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I1206 09:11:54.032064       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I1206 09:11:54.034457       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I1206 09:11:54.034716       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 09:11:54.089426       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1206 09:11:54.109062       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 09:11:55.735277       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:11:55.815999       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d9ab1c94b0adcd19eace1b7a10c0f065d7c953fc676839d82393eaab4f0c1819] <==
	I1206 09:11:27.430778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:11:27.531232       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:11:27.531444       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.220"]
	E1206 09:11:27.531895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:11:27.678473       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:11:27.678923       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:11:27.679749       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:11:27.716021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:11:27.719059       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:11:27.719117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:11:27.726703       1 config.go:200] "Starting service config controller"
	I1206 09:11:27.726733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:11:27.726750       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:11:27.726754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:11:27.730726       1 config.go:309] "Starting node config controller"
	I1206 09:11:27.730967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:11:27.730985       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:11:27.726764       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:11:27.736817       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:11:27.827489       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:11:27.827527       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:11:27.837415       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [31363d509c1e784ea3123303af98a26bde6cf40b74abff49509bf33b99ca8f00] <==
	E1206 09:11:17.083720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:11:17.083797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:11:17.083954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:11:17.085026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:11:17.085610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:11:17.085977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:11:17.086442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:11:17.086495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:11:17.086552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:11:17.086667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:11:17.086930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:11:17.939163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:11:17.952354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:11:17.975596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:11:18.009464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:11:18.049043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:11:18.084056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:11:18.094385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:11:18.198477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:11:18.257306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:11:18.287686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:11:18.314012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:11:18.315115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:11:18.580055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:11:21.477327       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:17:04 addons-269722 kubelet[1529]: E1206 09:17:04.922379    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-6xqwk" podUID="a0ecc385-f289-48f1-93f7-9f6d3f7560c9"
	Dec 06 09:17:08 addons-269722 kubelet[1529]: E1206 09:17:08.922006    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-hld5k" podUID="1ddd5808-c6e9-4c72-8c7e-2f29478962f1"
	Dec 06 09:17:18 addons-269722 kubelet[1529]: E1206 09:17:18.921720    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-6xqwk" podUID="a0ecc385-f289-48f1-93f7-9f6d3f7560c9"
	Dec 06 09:17:20 addons-269722 kubelet[1529]: E1206 09:17:20.922299    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-hld5k" podUID="1ddd5808-c6e9-4c72-8c7e-2f29478962f1"
	Dec 06 09:17:32 addons-269722 kubelet[1529]: E1206 09:17:32.922537    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-6xqwk" podUID="a0ecc385-f289-48f1-93f7-9f6d3f7560c9"
	Dec 06 09:17:34 addons-269722 kubelet[1529]: E1206 09:17:34.922082    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-hld5k" podUID="1ddd5808-c6e9-4c72-8c7e-2f29478962f1"
	Dec 06 09:17:40 addons-269722 kubelet[1529]: E1206 09:17:40.312407    1529 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 06 09:17:40 addons-269722 kubelet[1529]: E1206 09:17:40.312512    1529 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7741462-59ef-4947-ac5d-b5ffab88a570-gcr-creds podName:b7741462-59ef-4947-ac5d-b5ffab88a570 nodeName:}" failed. No retries permitted until 2025-12-06 09:19:42.312497028 +0000 UTC m=+502.503777122 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/b7741462-59ef-4947-ac5d-b5ffab88a570-gcr-creds") pod "registry-creds-764b6fb674-hkrh8" (UID: "b7741462-59ef-4947-ac5d-b5ffab88a570") : secret "registry-creds-gcr" not found
	Dec 06 09:17:41 addons-269722 kubelet[1529]: I1206 09:17:41.921706    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-4x5bp" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:17:43 addons-269722 kubelet[1529]: I1206 09:17:43.921761    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-l7sr8" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:17:45 addons-269722 kubelet[1529]: E1206 09:17:45.922064    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-6xqwk" podUID="a0ecc385-f289-48f1-93f7-9f6d3f7560c9"
	Dec 06 09:17:45 addons-269722 kubelet[1529]: E1206 09:17:45.922689    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-hld5k" podUID="1ddd5808-c6e9-4c72-8c7e-2f29478962f1"
	Dec 06 09:17:57 addons-269722 kubelet[1529]: E1206 09:17:57.922556    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-hld5k" podUID="1ddd5808-c6e9-4c72-8c7e-2f29478962f1"
	Dec 06 09:18:00 addons-269722 kubelet[1529]: E1206 09:18:00.921703    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-hkrh8" podUID="b7741462-59ef-4947-ac5d-b5ffab88a570"
	Dec 06 09:18:02 addons-269722 kubelet[1529]: I1206 09:18:02.922237    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-rbbt6" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:18:04 addons-269722 kubelet[1529]: I1206 09:18:04.177366    1529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="volcano-system/volcano-controllers-6fd4f85cb8-6xqwk" podStartSLOduration=2.798656094 podStartE2EDuration="6m28.177345573s" podCreationTimestamp="2025-12-06 09:11:36 +0000 UTC" firstStartedPulling="2025-12-06 09:11:37.768430032 +0000 UTC m=+17.959710126" lastFinishedPulling="2025-12-06 09:18:03.147119507 +0000 UTC m=+403.338399605" observedRunningTime="2025-12-06 09:18:04.173688302 +0000 UTC m=+404.364968416" watchObservedRunningTime="2025-12-06 09:18:04.177345573 +0000 UTC m=+404.368625689"
	Dec 06 09:18:06 addons-269722 kubelet[1529]: I1206 09:18:06.922370    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-hbw67" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:18:13 addons-269722 kubelet[1529]: E1206 09:18:13.872904    1529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Dec 06 09:18:13 addons-269722 kubelet[1529]: E1206 09:18:13.872960    1529 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Dec 06 09:18:13 addons-269722 kubelet[1529]: E1206 09:18:13.873045    1529 kuberuntime_manager.go:1449] "Unhandled Error" err="container volcano-scheduler start failed in pod volcano-scheduler-76c996c8bf-hld5k_volcano-system(1ddd5808-c6e9-4c72-8c7e-2f29478962f1): ErrImagePull: failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:18:13 addons-269722 kubelet[1529]: E1206 09:18:13.873079    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-hld5k" podUID="1ddd5808-c6e9-4c72-8c7e-2f29478962f1"
	Dec 06 09:18:16 addons-269722 kubelet[1529]: I1206 09:18:16.921766    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-knqvl" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:18:26 addons-269722 kubelet[1529]: E1206 09:18:26.921816    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-hld5k" podUID="1ddd5808-c6e9-4c72-8c7e-2f29478962f1"
	Dec 06 09:18:39 addons-269722 kubelet[1529]: E1206 09:18:39.926468    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-hld5k" podUID="1ddd5808-c6e9-4c72-8c7e-2f29478962f1"
	Dec 06 09:18:43 addons-269722 kubelet[1529]: I1206 09:18:43.924800    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-4x5bp" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [a9394a7445ed60a376c7cd3e75aaac67b588412df8710faeea1ea9b282a9b119] <==
	W1206 09:18:25.194566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:27.203765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:27.208728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:29.212345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:29.218307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:31.221562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:31.237013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:33.240847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:33.247164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:35.251363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:35.259759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:37.263866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:37.273353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:39.279171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:39.285157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:41.290185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:41.295767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:43.300235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:43.305768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:45.309982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:45.317754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:47.322121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:47.327510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:49.330778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:18:49.337972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-269722 -n addons-269722
helpers_test.go:269: (dbg) Run:  kubectl --context addons-269722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k registry-creds-764b6fb674-hkrh8 volcano-admission-init-kx7hz volcano-scheduler-76c996c8bf-hld5k
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-269722 describe pod ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k registry-creds-764b6fb674-hkrh8 volcano-admission-init-kx7hz volcano-scheduler-76c996c8bf-hld5k
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-269722 describe pod ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k registry-creds-764b6fb674-hkrh8 volcano-admission-init-kx7hz volcano-scheduler-76c996c8bf-hld5k: exit status 1 (65.025607ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kl75g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xpn6k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-hkrh8" not found
	Error from server (NotFound): pods "volcano-admission-init-kx7hz" not found
	Error from server (NotFound): pods "volcano-scheduler-76c996c8bf-hld5k" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-269722 describe pod ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k registry-creds-764b6fb674-hkrh8 volcano-admission-init-kx7hz volcano-scheduler-76c996c8bf-hld5k: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 addons disable volcano --alsologtostderr -v=1: (11.773604695s)
--- FAIL: TestAddons/serial/Volcano (374.07s)
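Note: the Volcano pods never became Ready because unauthenticated pulls of the docker.io/volcanosh/* images hit Docker Hub's rate limit (429 Too Many Requests in the kubelet log above). A minimal sketch of one possible workaround, assuming Docker Hub credentials are available as $DOCKER_USER/$DOCKER_PASS (the secret name and the approach are illustrative, not part of this test):

# Create a Docker Hub pull secret in the namespace that owns the failing pods
kubectl --context addons-269722 -n volcano-system \
  create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="$DOCKER_USER" \
  --docker-password="$DOCKER_PASS"

# Let the volcano-scheduler service account use the secret for image pulls
kubectl --context addons-269722 -n volcano-system \
  patch serviceaccount volcano-scheduler \
  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Authenticated pulls get a higher rate limit, so the vc-scheduler and vc-controller-manager images would then have a better chance of pulling within the 6m0s wait.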

TestAddons/parallel/Ingress (491.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-269722 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-269722 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-269722 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9f2e16bd-5c5a-4de7-8925-9e8608d94e2b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-269722 -n addons-269722
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-12-06 09:27:42.886406316 +0000 UTC m=+1036.173783439
addons_test.go:252: (dbg) Run:  kubectl --context addons-269722 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-269722 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-269722/192.168.39.220
Start Time:       Sat, 06 Dec 2025 09:19:42 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.33
IPs:
IP:  10.244.0.33
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tppjg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tppjg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/nginx to addons-269722
Normal   Pulling    4m57s (x5 over 7m59s)   kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     4m56s (x5 over 7m58s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m56s (x5 over 7m58s)   kubelet            Error: ErrImagePull
Normal   BackOff    2m50s (x21 over 7m58s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m50s (x21 over 7m58s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-269722 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-269722 logs nginx -n default: exit status 1 (69.274521ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-269722 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
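Note: the nginx pod failed for the same reason, an unauthenticated rate-limited pull of docker.io/nginx:alpine. A minimal sketch of an alternative workaround, assuming the image can be pulled once on the Jenkins host and side-loaded into the node (this is an assumption about the environment, not something the test does):

# Pull once on the host, then load it into the addons-269722 node's containerd image store
docker pull docker.io/nginx:alpine
out/minikube-linux-amd64 -p addons-269722 image load docker.io/nginx:alpine

With the image already present on the node, the kubelet does not need to contact registry-1.docker.io, so the ImagePullBackOff loop is avoided (provided the pod's imagePullPolicy is not Always).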
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-269722 -n addons-269722
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 logs -n 25: (1.060811533s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                     ARGS                                                                                                                                                                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-802744                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-802744 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-600827                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-345944                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-345944 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-802744                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-802744 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ start   │ --download-only -p binary-mirror-098159 --alsologtostderr --binary-mirror http://127.0.0.1:43773 --driver=kvm2  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-098159 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ -p binary-mirror-098159                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ binary-mirror-098159 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ addons  │ enable dashboard -p addons-269722                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ addons  │ disable dashboard -p addons-269722                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ start   │ -p addons-269722 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:12 UTC │
	│ addons  │ addons-269722 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:18 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ enable headlamp -p addons-269722 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ ip      │ addons-269722 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-269722                                                                                                                                                                                                                                                                                                                                                                                               │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                              │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:25 UTC │
	│ addons  │ addons-269722 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:25 UTC │ 06 Dec 25 09:25 UTC │
	│ addons  │ addons-269722 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:25 UTC │ 06 Dec 25 09:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:41.905948  388517 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:41.906056  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:41.906068  388517 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:41.906073  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:41.906290  388517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:10:41.906764  388517 out.go:368] Setting JSON to false
	I1206 09:10:41.907751  388517 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6792,"bootTime":1765005450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:41.907809  388517 start.go:143] virtualization: kvm guest
	I1206 09:10:41.909713  388517 out.go:179] * [addons-269722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:10:41.911209  388517 notify.go:221] Checking for updates...
	I1206 09:10:41.911229  388517 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:10:41.912645  388517 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:41.913886  388517 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:10:41.915020  388517 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:41.919365  388517 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:10:41.920580  388517 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:10:41.921823  388517 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:10:41.950647  388517 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 09:10:41.951784  388517 start.go:309] selected driver: kvm2
	I1206 09:10:41.951797  388517 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:10:41.951808  388517 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:10:41.952432  388517 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:10:41.952640  388517 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:10:41.952666  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:10:41.952706  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:10:41.952714  388517 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:10:41.952753  388517 start.go:353] cluster config:
	{Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:41.952877  388517 iso.go:125] acquiring lock: {Name:mk1a7d442a240aa1785a2e6e751e007c5a8723f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:10:41.954741  388517 out.go:179] * Starting "addons-269722" primary control-plane node in "addons-269722" cluster
	I1206 09:10:41.955614  388517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:10:41.955638  388517 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1206 09:10:41.955646  388517 cache.go:65] Caching tarball of preloaded images
	I1206 09:10:41.955737  388517 preload.go:238] Found /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1206 09:10:41.955748  388517 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 09:10:41.956043  388517 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json ...
	I1206 09:10:41.956066  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json: {Name:mka83bdbdc23544e613eb52d015ad5fe63a1e910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:41.956183  388517 start.go:360] acquireMachinesLock for addons-269722: {Name:mkc77d1cf752e1546ce7850a29dbe975ae7fa9b7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:10:41.956225  388517 start.go:364] duration metric: took 30.995µs to acquireMachinesLock for "addons-269722"
	I1206 09:10:41.956247  388517 start.go:93] Provisioning new machine with config: &{Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:10:41.956289  388517 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 09:10:41.957646  388517 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1206 09:10:41.957797  388517 start.go:159] libmachine.API.Create for "addons-269722" (driver="kvm2")
	I1206 09:10:41.957831  388517 client.go:173] LocalClient.Create starting
	I1206 09:10:41.957926  388517 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem
	I1206 09:10:41.993468  388517 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem
	I1206 09:10:42.078767  388517 main.go:143] libmachine: creating domain...
	I1206 09:10:42.078784  388517 main.go:143] libmachine: creating network...
	I1206 09:10:42.080023  388517 main.go:143] libmachine: found existing default network
	I1206 09:10:42.080210  388517 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:10:42.080787  388517 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d56770}
	I1206 09:10:42.080910  388517 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-269722</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:10:42.086592  388517 main.go:143] libmachine: creating private network mk-addons-269722 192.168.39.0/24...
	I1206 09:10:42.152917  388517 main.go:143] libmachine: private network mk-addons-269722 192.168.39.0/24 created
	I1206 09:10:42.153176  388517 main.go:143] libmachine: <network>
	  <name>mk-addons-269722</name>
	  <uuid>2336c74c-93b2-42b0-890b-3a8a8a25a922</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:fd:c9:1f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
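
For reference, creating such a network outside of minikube follows the same shape: render the XML, then define, start, and autostart it. A minimal Go sketch, using virsh rather than the libvirt API the driver itself uses; the network name and subnet below are illustrative, not taken from this run:

// Sketch: define and start a libvirt network from XML via virsh.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	xml := `<network>
  <name>mk-example</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	f.WriteString(xml)
	f.Close()

	// net-define makes the network persistent, net-start activates it,
	// net-autostart brings it back up after a host reboot.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-example"},
		{"net-autostart", "mk-example"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		if err != nil {
			panic(err)
		}
	}
}
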
	
	I1206 09:10:42.153203  388517 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 ...
	I1206 09:10:42.153230  388517 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22047-383742/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 09:10:42.153244  388517 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:42.153313  388517 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22047-383742/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22047-383742/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
	I1206 09:10:42.415061  388517 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa...
	I1206 09:10:42.429309  388517 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk...
	I1206 09:10:42.429369  388517 main.go:143] libmachine: Writing magic tar header
	I1206 09:10:42.429404  388517 main.go:143] libmachine: Writing SSH key tar header
	I1206 09:10:42.429498  388517 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 ...
	I1206 09:10:42.429571  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722
	I1206 09:10:42.429604  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 (perms=drwx------)
	I1206 09:10:42.429623  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube/machines
	I1206 09:10:42.429636  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube/machines (perms=drwxr-xr-x)
	I1206 09:10:42.429647  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:42.429656  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube (perms=drwxr-xr-x)
	I1206 09:10:42.429674  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742
	I1206 09:10:42.429704  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742 (perms=drwxrwxr-x)
	I1206 09:10:42.429722  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1206 09:10:42.429744  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 09:10:42.429758  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1206 09:10:42.429765  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 09:10:42.429775  388517 main.go:143] libmachine: checking permissions on dir: /home
	I1206 09:10:42.429781  388517 main.go:143] libmachine: skipping /home - not owner
	I1206 09:10:42.429788  388517 main.go:143] libmachine: defining domain...
	I1206 09:10:42.431063  388517 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-269722</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-269722'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
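
A comparable step for the VM itself: once the XML above has been rendered, defining and starting the domain is roughly equivalent to virsh define followed by virsh start. A small Go sketch; the file path and domain name are illustrative:

// Sketch: define a persistent libvirt domain from an XML file, then start it.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s %v:\n%s\n", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	run("virsh", "define", "/tmp/addons-example.xml") // path is illustrative
	run("virsh", "start", "addons-example")           // must match <name> in the XML
}
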
	
	I1206 09:10:42.438342  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:8d:9c:cf in network default
	I1206 09:10:42.438932  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:42.438948  388517 main.go:143] libmachine: starting domain...
	I1206 09:10:42.438952  388517 main.go:143] libmachine: ensuring networks are active...
	I1206 09:10:42.439580  388517 main.go:143] libmachine: Ensuring network default is active
	I1206 09:10:42.439915  388517 main.go:143] libmachine: Ensuring network mk-addons-269722 is active
	I1206 09:10:42.440425  388517 main.go:143] libmachine: getting domain XML...
	I1206 09:10:42.441355  388517 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-269722</name>
	  <uuid>faaa974f-af9d-46f8-a3b5-02afcdf78e43</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f2:80:b2'/>
	      <source network='mk-addons-269722'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:8d:9c:cf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1206 09:10:43.781082  388517 main.go:143] libmachine: waiting for domain to start...
	I1206 09:10:43.782318  388517 main.go:143] libmachine: domain is now running
	I1206 09:10:43.782338  388517 main.go:143] libmachine: waiting for IP...
	I1206 09:10:43.783021  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:43.783369  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:43.783385  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:43.783643  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:43.783696  388517 retry.go:31] will retry after 278.987444ms: waiting for domain to come up
	I1206 09:10:44.064124  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.064595  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.064606  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.064919  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.064957  388517 retry.go:31] will retry after 330.689041ms: waiting for domain to come up
	I1206 09:10:44.397460  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.397947  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.397962  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.398238  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.398277  388517 retry.go:31] will retry after 413.406233ms: waiting for domain to come up
	I1206 09:10:44.812999  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.813581  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.813601  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.813924  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.813970  388517 retry.go:31] will retry after 440.754763ms: waiting for domain to come up
	I1206 09:10:45.256730  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:45.257210  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:45.257228  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:45.257514  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:45.257556  388517 retry.go:31] will retry after 717.110818ms: waiting for domain to come up
	I1206 09:10:45.975902  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:45.976408  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:45.976424  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:45.976689  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:45.976722  388517 retry.go:31] will retry after 589.246662ms: waiting for domain to come up
	I1206 09:10:46.567419  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:46.567953  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:46.567973  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:46.568280  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:46.568326  388517 retry.go:31] will retry after 857.836192ms: waiting for domain to come up
	I1206 09:10:47.427627  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:47.428082  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:47.428097  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:47.428421  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:47.428475  388517 retry.go:31] will retry after 969.137484ms: waiting for domain to come up
	I1206 09:10:48.399647  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:48.400199  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:48.400215  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:48.400562  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:48.400615  388517 retry.go:31] will retry after 1.740343977s: waiting for domain to come up
	I1206 09:10:50.143512  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:50.143999  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:50.144014  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:50.144329  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:50.144363  388517 retry.go:31] will retry after 2.180103707s: waiting for domain to come up
	I1206 09:10:52.325956  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:52.326470  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:52.326485  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:52.326823  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:52.326870  388517 retry.go:31] will retry after 2.821995124s: waiting for domain to come up
	I1206 09:10:55.151850  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:55.152380  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:55.152397  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:55.152818  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:55.152881  388517 retry.go:31] will retry after 2.278330426s: waiting for domain to come up
	I1206 09:10:57.432300  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:57.432813  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:57.432829  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:57.433107  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:57.433144  388517 retry.go:31] will retry after 3.558016636s: waiting for domain to come up
	I1206 09:11:00.994805  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:00.995368  388517 main.go:143] libmachine: domain addons-269722 has current primary IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:00.995386  388517 main.go:143] libmachine: found domain IP: 192.168.39.220
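
The retries above are a poll-with-backoff on the domain's DHCP lease, falling back to the ARP cache. A rough Go sketch of the same loop, assuming virsh domifaddr is available and using an illustrative domain name:

// Sketch: wait for a libvirt domain to obtain an IP, checking DHCP leases
// first and the ARP cache second, with growing delays between attempts.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"regexp"
	"time"
)

var ipRe = regexp.MustCompile(`(\d{1,3}\.){3}\d{1,3}`)

func lookupIP(domain, source string) string {
	out, _ := exec.Command("virsh", "domifaddr", domain, "--source", source).CombinedOutput()
	return ipRe.FindString(string(out))
}

func main() {
	domain := "addons-example" // illustrative
	for attempt := 1; attempt <= 15; attempt++ {
		for _, src := range []string{"lease", "arp"} {
			if ip := lookupIP(domain, src); ip != "" {
				fmt.Println("found IP:", ip)
				return
			}
		}
		delay := time.Duration(attempt) * (time.Duration(300+rand.Intn(700)) * time.Millisecond)
		fmt.Println("no IP yet, retrying after", delay)
		time.Sleep(delay)
	}
	fmt.Println("timed out waiting for an IP")
}
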
	I1206 09:11:00.995394  388517 main.go:143] libmachine: reserving static IP address...
	I1206 09:11:00.995774  388517 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-269722", mac: "52:54:00:f2:80:b2", ip: "192.168.39.220"} in network mk-addons-269722
	I1206 09:11:01.169742  388517 main.go:143] libmachine: reserved static IP address 192.168.39.220 for domain addons-269722
	I1206 09:11:01.169781  388517 main.go:143] libmachine: waiting for SSH...
	I1206 09:11:01.169788  388517 main.go:143] libmachine: Getting to WaitForSSH function...
	I1206 09:11:01.172807  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.173481  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.173514  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.173694  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.173964  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.173979  388517 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1206 09:11:01.272210  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
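
The "waiting for SSH" step above boils down to retrying until the guest accepts connections on port 22 and a trivial command (exit 0) succeeds. minikube uses its own native SSH client for this; the sketch below shells out to the OpenSSH client instead, and the address, user, and key path are stand-ins:

// Sketch: wait for TCP :22, then confirm `exit 0` succeeds over SSH.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"time"
)

func main() {
	addr := "192.168.39.220:22" // illustrative guest address
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			break
		}
		time.Sleep(time.Second)
	}
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "ConnectTimeout=5",
		"-i", "/path/to/machines/example/id_rsa", // illustrative key path
		"docker@192.168.39.220", "exit 0")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("ssh probe failed: %v\n%s", err, out)
		return
	}
	fmt.Println("SSH is ready")
}
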
	I1206 09:11:01.272513  388517 main.go:143] libmachine: domain creation complete
	I1206 09:11:01.273828  388517 machine.go:94] provisionDockerMachine start ...
	I1206 09:11:01.275801  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.276155  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.276181  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.276321  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.276511  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.276520  388517 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:11:01.373100  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1206 09:11:01.373130  388517 buildroot.go:166] provisioning hostname "addons-269722"
	I1206 09:11:01.375944  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.376345  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.376372  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.376608  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.376841  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.376854  388517 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-269722 && echo "addons-269722" | sudo tee /etc/hostname
	I1206 09:11:01.490874  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-269722
	
	I1206 09:11:01.493600  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.493995  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.494015  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.494204  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.494457  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.494481  388517 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-269722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-269722/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-269722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:11:01.601899  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
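
The embedded shell above makes sure 127.0.1.1 resolves to the node's hostname, rewriting an existing 127.0.1.1 entry or appending one. A rough Go equivalent of that logic; the hostname is illustrative and the sketch only prints the result rather than writing /etc/hosts:

// Sketch: ensure /etc/hosts maps 127.0.1.1 to the given hostname.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	const hostname = "addons-example" // illustrative
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	s := string(data)
	switch {
	case regexp.MustCompile(`(?m)^.*\s` + hostname + `$`).MatchString(s):
		// a line already ends with the hostname, nothing to do
	case regexp.MustCompile(`(?m)^127\.0\.1\.1\s`).MatchString(s):
		s = regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`).
			ReplaceAllString(s, "127.0.1.1 "+hostname)
	default:
		s = strings.TrimRight(s, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	fmt.Print(s) // a real run would write this back with root privileges
}
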
	I1206 09:11:01.601925  388517 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-383742/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-383742/.minikube}
	I1206 09:11:01.601941  388517 buildroot.go:174] setting up certificates
	I1206 09:11:01.601950  388517 provision.go:84] configureAuth start
	I1206 09:11:01.604648  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.605083  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.605108  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607340  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607665  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.607684  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607799  388517 provision.go:143] copyHostCerts
	I1206 09:11:01.607857  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/ca.pem (1082 bytes)
	I1206 09:11:01.608028  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/cert.pem (1123 bytes)
	I1206 09:11:01.608130  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/key.pem (1675 bytes)
	I1206 09:11:01.608197  388517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem org=jenkins.addons-269722 san=[127.0.0.1 192.168.39.220 addons-269722 localhost minikube]
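
The server certificate is issued for the SANs listed above (loopback, the VM's IP, the hostname, and "minikube"). A condensed sketch with Go's crypto/x509; it self-signs for brevity, whereas the step above signs with the cluster's own CA key, and the names and IPs here are illustrative:

// Sketch: create a server certificate carrying DNS and IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"example.addons"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-example", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
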
	I1206 09:11:01.761887  388517 provision.go:177] copyRemoteCerts
	I1206 09:11:01.761947  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:11:01.764212  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.764543  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.764581  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.764716  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:01.844794  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:11:01.873452  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:11:01.901904  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:11:01.930285  388517 provision.go:87] duration metric: took 328.321351ms to configureAuth
	I1206 09:11:01.930311  388517 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:11:01.930501  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:01.930521  388517 machine.go:97] duration metric: took 656.676665ms to provisionDockerMachine
	I1206 09:11:01.930531  388517 client.go:176] duration metric: took 19.972691553s to LocalClient.Create
	I1206 09:11:01.930551  388517 start.go:167] duration metric: took 19.97275355s to libmachine.API.Create "addons-269722"
	I1206 09:11:01.930596  388517 start.go:293] postStartSetup for "addons-269722" (driver="kvm2")
	I1206 09:11:01.930611  388517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:11:01.930658  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:11:01.933229  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.933604  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.933625  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.933768  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.013069  388517 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:11:02.017563  388517 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:11:02.017583  388517 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-383742/.minikube/addons for local assets ...
	I1206 09:11:02.017651  388517 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-383742/.minikube/files for local assets ...
	I1206 09:11:02.017684  388517 start.go:296] duration metric: took 87.076069ms for postStartSetup
	I1206 09:11:02.020584  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.020944  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.020967  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.021198  388517 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json ...
	I1206 09:11:02.021364  388517 start.go:128] duration metric: took 20.065065791s to createHost
	I1206 09:11:02.023485  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.023794  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.023813  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.023959  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:02.024173  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:02.024185  388517 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:11:02.121919  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765012262.085933657
	
	I1206 09:11:02.121936  388517 fix.go:216] guest clock: 1765012262.085933657
	I1206 09:11:02.121942  388517 fix.go:229] Guest: 2025-12-06 09:11:02.085933657 +0000 UTC Remote: 2025-12-06 09:11:02.021381724 +0000 UTC m=+20.161953678 (delta=64.551933ms)
	I1206 09:11:02.121960  388517 fix.go:200] guest clock delta is within tolerance: 64.551933ms
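
The clock check above runs `date +%s.%N` in the guest and compares the result with the host's clock against a tolerance. A small sketch of that comparison; the sample value and threshold are illustrative:

// Sketch: parse a guest `date +%s.%N` reading and measure drift from the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1765012262.085933657" // as returned by the guest over SSH
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative threshold
	if delta > tolerance {
		fmt.Println("guest clock drift exceeds tolerance:", delta)
	} else {
		fmt.Println("guest clock delta is within tolerance:", delta)
	}
}
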
	I1206 09:11:02.121974  388517 start.go:83] releasing machines lock for "addons-269722", held for 20.165731842s
	I1206 09:11:02.124594  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.124944  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.124973  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.125474  388517 ssh_runner.go:195] Run: cat /version.json
	I1206 09:11:02.125592  388517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:11:02.128433  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.128746  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.128763  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.128921  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.128989  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.129445  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.129480  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.129624  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.204247  388517 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:02.228305  388517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:11:02.234563  388517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:11:02.234633  388517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:11:02.260428  388517 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:11:02.260454  388517 start.go:496] detecting cgroup driver to use...
	I1206 09:11:02.260528  388517 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:11:02.297166  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:11:02.315488  388517 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:11:02.315555  388517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:11:02.332111  388517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:11:02.347076  388517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:11:02.491701  388517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:11:02.703514  388517 docker.go:234] disabling docker service ...
	I1206 09:11:02.703604  388517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:11:02.719452  388517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:11:02.733466  388517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:11:02.882667  388517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:11:03.020738  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:11:03.036166  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:11:03.057682  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:11:03.069874  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:11:03.081945  388517 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:11:03.082022  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:11:03.094105  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:11:03.106250  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:11:03.117968  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:11:03.130001  388517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:11:03.142658  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:11:03.154729  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:11:03.166983  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:11:03.178658  388517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:11:03.188759  388517 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 09:11:03.188803  388517 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 09:11:03.211314  388517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:11:03.224103  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:03.361032  388517 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:11:03.404281  388517 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:11:03.404385  388517 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:11:03.409523  388517 retry.go:31] will retry after 1.49666292s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1206 09:11:04.906469  388517 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:11:04.912677  388517 start.go:564] Will wait 60s for crictl version
	I1206 09:11:04.912759  388517 ssh_runner.go:195] Run: which crictl
	I1206 09:11:04.916909  388517 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:11:04.952021  388517 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1206 09:11:04.952114  388517 ssh_runner.go:195] Run: containerd --version
	I1206 09:11:04.979176  388517 ssh_runner.go:195] Run: containerd --version
	I1206 09:11:05.046042  388517 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 1.7.23 ...
	I1206 09:11:05.113332  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:05.113713  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:05.113733  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:05.113904  388517 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:11:05.118728  388517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:05.134279  388517 kubeadm.go:884] updating cluster {Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:11:05.134389  388517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:11:05.134436  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:05.163245  388517 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1206 09:11:05.163338  388517 ssh_runner.go:195] Run: which lz4
	I1206 09:11:05.167791  388517 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 09:11:05.172645  388517 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 09:11:05.172675  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (339763354 bytes)
	I1206 09:11:06.408453  388517 containerd.go:563] duration metric: took 1.240701247s to copy over tarball
	I1206 09:11:06.408534  388517 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 09:11:07.824785  388517 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.41620911s)
	I1206 09:11:07.824829  388517 containerd.go:570] duration metric: took 1.416348198s to extract the tarball
	I1206 09:11:07.824837  388517 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 09:11:07.876750  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:08.019449  388517 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:11:08.055912  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:08.089979  388517 retry.go:31] will retry after 204.800226ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:08Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1206 09:11:08.295519  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:08.332986  388517 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:11:08.333019  388517 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:11:08.333035  388517 kubeadm.go:935] updating node { 192.168.39.220 8443 v1.34.2 containerd true true} ...
	I1206 09:11:08.333199  388517 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-269722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:11:08.333263  388517 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:11:08.363626  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:11:08.363652  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:11:08.363671  388517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:11:08.363694  388517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-269722 NodeName:addons-269722 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:11:08.363802  388517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-269722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:11:08.363898  388517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:11:08.376320  388517 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:11:08.376400  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:11:08.387974  388517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1206 09:11:08.408073  388517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:11:08.428105  388517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1206 09:11:08.448237  388517 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I1206 09:11:08.452207  388517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:08.466654  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:08.612134  388517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:08.650190  388517 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722 for IP: 192.168.39.220
	I1206 09:11:08.650221  388517 certs.go:195] generating shared ca certs ...
	I1206 09:11:08.650248  388517 certs.go:227] acquiring lock for ca certs: {Name:mkf308ce4033be42aa40d533f6774edcee747959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.650426  388517 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key
	I1206 09:11:08.753472  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt ...
	I1206 09:11:08.753502  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt: {Name:mk0bc547e2c4a3698a714e2e67e37fe0843ac532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.753663  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key ...
	I1206 09:11:08.753675  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key: {Name:mk257636778cdf81faeb62cfd641c994d65ea561 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.753763  388517 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key
	I1206 09:11:08.944161  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt ...
	I1206 09:11:08.944193  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt: {Name:mk7a27f62c25f1293f691b851f1b366a8491b851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.944357  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key ...
	I1206 09:11:08.944369  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key: {Name:mk0dbe369ea38e824cffd9d96349344507b04d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.944442  388517 certs.go:257] generating profile certs ...
	I1206 09:11:08.944507  388517 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key
	I1206 09:11:08.944522  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt with IP's: []
	I1206 09:11:09.004417  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt ...
	I1206 09:11:09.004443  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: {Name:mkc7ee580529997a0158c489e5de6aaaab4381ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.004577  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key ...
	I1206 09:11:09.004587  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key: {Name:mk6aea14e5a790daaff4a5aa584541cbd36fa7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.004653  388517 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9
	I1206 09:11:09.004671  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220]
	I1206 09:11:09.103453  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 ...
	I1206 09:11:09.103485  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9: {Name:mkb69edd53ea15cc714b2e6dcd35fb9bda8e0a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.103642  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9 ...
	I1206 09:11:09.103658  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9: {Name:mkbef642e3d05cf341f2d82d3597bab753cd2174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.103728  388517 certs.go:382] copying /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 -> /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt
	I1206 09:11:09.103816  388517 certs.go:386] copying /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9 -> /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key
	I1206 09:11:09.103876  388517 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key
	I1206 09:11:09.103896  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt with IP's: []
	I1206 09:11:09.195473  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt ...
	I1206 09:11:09.195504  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt: {Name:mk1ed5a652995aaac584bd788ffca22c7d7d4179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.195645  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key ...
	I1206 09:11:09.195657  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key: {Name:mkb0905602ecfb2d53502a566a95204a8f98bd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.195846  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 09:11:09.195899  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:11:09.195942  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:11:09.195967  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem (1675 bytes)
	I1206 09:11:09.196610  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:11:09.227924  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:11:09.257244  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:11:09.287169  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:11:09.319682  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:11:09.354785  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:11:09.391203  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:11:09.419761  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:11:09.448250  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:11:09.476343  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:11:09.495953  388517 ssh_runner.go:195] Run: openssl version
	I1206 09:11:09.502134  388517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.512996  388517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:11:09.524111  388517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.529273  388517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.529325  388517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.536780  388517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:11:09.547642  388517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:11:09.558961  388517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:11:09.563664  388517 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:11:09.563723  388517 kubeadm.go:401] StartCluster: {Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:09.563812  388517 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:11:09.563854  388517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:11:09.597231  388517 cri.go:89] found id: ""
	I1206 09:11:09.597295  388517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:11:09.609197  388517 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:11:09.619916  388517 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:11:09.631012  388517 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:11:09.631028  388517 kubeadm.go:158] found existing configuration files:
	
	I1206 09:11:09.631067  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:11:09.641398  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:11:09.641442  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:11:09.652328  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:11:09.662630  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:11:09.662683  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:11:09.673582  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:11:09.683944  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:11:09.683997  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:11:09.694924  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:11:09.705284  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:11:09.705332  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:11:09.716270  388517 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 09:11:09.765023  388517 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:11:09.765245  388517 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:11:09.858054  388517 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:11:09.858229  388517 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:11:09.858396  388517 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:11:09.865139  388517 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:11:09.920280  388517 out.go:252]   - Generating certificates and keys ...
	I1206 09:11:09.920378  388517 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:11:09.920462  388517 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:11:10.105985  388517 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:11:10.865814  388517 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:11:10.897033  388517 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:11:11.249180  388517 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:11:11.405265  388517 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:11:11.405459  388517 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-269722 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1206 09:11:11.595783  388517 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:11:11.595930  388517 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-269722 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1206 09:11:11.685113  388517 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:11:11.795320  388517 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:11:12.056322  388517 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:11:12.057602  388517 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:11:12.245522  388517 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:11:12.344100  388517 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:11:12.481696  388517 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:11:12.805057  388517 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:11:12.987909  388517 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:11:12.988354  388517 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:11:12.990637  388517 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:11:12.992591  388517 out.go:252]   - Booting up control plane ...
	I1206 09:11:12.992683  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:11:12.992757  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:11:12.992829  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:11:13.009376  388517 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:11:13.009528  388517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:11:13.016083  388517 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:11:13.016157  388517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:11:13.016213  388517 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:11:13.195314  388517 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:11:13.195457  388517 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:11:13.696155  388517 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.400144ms
	I1206 09:11:13.701317  388517 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:11:13.701412  388517 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.220:8443/livez
	I1206 09:11:13.701516  388517 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:11:13.701609  388517 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:11:15.925448  388517 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.2258309s
	I1206 09:11:17.097937  388517 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.399298925s
	I1206 09:11:19.199961  388517 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502821586s
	I1206 09:11:19.217728  388517 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:11:19.231172  388517 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:11:19.244842  388517 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:11:19.245047  388517 kubeadm.go:319] [mark-control-plane] Marking the node addons-269722 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:11:19.255597  388517 kubeadm.go:319] [bootstrap-token] Using token: tnc6di.0o5js773tkjcekar
	I1206 09:11:19.256827  388517 out.go:252]   - Configuring RBAC rules ...
	I1206 09:11:19.256963  388517 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:11:19.261388  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:11:19.269766  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:11:19.273599  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:11:19.281952  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:11:19.288853  388517 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:11:19.605592  388517 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:11:20.070227  388517 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:11:20.605934  388517 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:11:20.606844  388517 kubeadm.go:319] 
	I1206 09:11:20.606929  388517 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:11:20.606938  388517 kubeadm.go:319] 
	I1206 09:11:20.607026  388517 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:11:20.607033  388517 kubeadm.go:319] 
	I1206 09:11:20.607064  388517 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:11:20.607146  388517 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:11:20.607224  388517 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:11:20.607234  388517 kubeadm.go:319] 
	I1206 09:11:20.607327  388517 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:11:20.607350  388517 kubeadm.go:319] 
	I1206 09:11:20.607426  388517 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:11:20.607434  388517 kubeadm.go:319] 
	I1206 09:11:20.607510  388517 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:11:20.607639  388517 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:11:20.607758  388517 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:11:20.607774  388517 kubeadm.go:319] 
	I1206 09:11:20.607894  388517 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:11:20.607992  388517 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:11:20.608007  388517 kubeadm.go:319] 
	I1206 09:11:20.608129  388517 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tnc6di.0o5js773tkjcekar \
	I1206 09:11:20.608283  388517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04fdba1f0cc9e5b6ff9fb0c67883e9efc1b2d27a26263d71016b7c2692858db2 \
	I1206 09:11:20.608307  388517 kubeadm.go:319] 	--control-plane 
	I1206 09:11:20.608316  388517 kubeadm.go:319] 
	I1206 09:11:20.608391  388517 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:11:20.608397  388517 kubeadm.go:319] 
	I1206 09:11:20.608494  388517 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tnc6di.0o5js773tkjcekar \
	I1206 09:11:20.608638  388517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04fdba1f0cc9e5b6ff9fb0c67883e9efc1b2d27a26263d71016b7c2692858db2 
	I1206 09:11:20.609835  388517 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:11:20.609893  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:11:20.609910  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:11:20.611407  388517 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:11:20.612520  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:11:20.630100  388517 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1206 09:11:20.652382  388517 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:11:20.652515  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:20.652537  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-269722 minikube.k8s.io/updated_at=2025_12_06T09_11_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-269722 minikube.k8s.io/primary=true
	I1206 09:11:20.694430  388517 ops.go:34] apiserver oom_adj: -16
	I1206 09:11:20.784013  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:21.284280  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:21.784935  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:22.284329  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:22.784096  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:23.284134  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:23.784412  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:24.285006  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:24.365500  388517 kubeadm.go:1114] duration metric: took 3.713041621s to wait for elevateKubeSystemPrivileges
	I1206 09:11:24.365554  388517 kubeadm.go:403] duration metric: took 14.801837471s to StartCluster
	I1206 09:11:24.365583  388517 settings.go:142] acquiring lock: {Name:mk5046213dcb1abe0d7fe7b15722aa4884a98be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:24.365735  388517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:11:24.366166  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/kubeconfig: {Name:mka1b03c13e1e115a4ba1af8cb483b83d246825c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:24.366385  388517 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:11:24.366393  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:11:24.366467  388517 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 09:11:24.366579  388517 addons.go:70] Setting yakd=true in profile "addons-269722"
	I1206 09:11:24.366593  388517 addons.go:70] Setting inspektor-gadget=true in profile "addons-269722"
	I1206 09:11:24.366594  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:24.366606  388517 addons.go:239] Setting addon yakd=true in "addons-269722"
	I1206 09:11:24.366612  388517 addons.go:239] Setting addon inspektor-gadget=true in "addons-269722"
	I1206 09:11:24.366637  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366644  388517 addons.go:70] Setting default-storageclass=true in profile "addons-269722"
	I1206 09:11:24.366651  388517 addons.go:70] Setting gcp-auth=true in profile "addons-269722"
	I1206 09:11:24.366663  388517 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-269722"
	I1206 09:11:24.366682  388517 mustload.go:66] Loading cluster: addons-269722
	I1206 09:11:24.366726  388517 addons.go:70] Setting registry-creds=true in profile "addons-269722"
	I1206 09:11:24.366753  388517 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-269722"
	I1206 09:11:24.366753  388517 addons.go:70] Setting cloud-spanner=true in profile "addons-269722"
	I1206 09:11:24.366778  388517 addons.go:239] Setting addon registry-creds=true in "addons-269722"
	I1206 09:11:24.366781  388517 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-269722"
	I1206 09:11:24.366784  388517 addons.go:239] Setting addon cloud-spanner=true in "addons-269722"
	I1206 09:11:24.366787  388517 addons.go:70] Setting storage-provisioner=true in profile "addons-269722"
	I1206 09:11:24.366800  388517 addons.go:239] Setting addon storage-provisioner=true in "addons-269722"
	I1206 09:11:24.366811  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366818  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366819  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366821  388517 addons.go:70] Setting metrics-server=true in profile "addons-269722"
	I1206 09:11:24.366836  388517 addons.go:239] Setting addon metrics-server=true in "addons-269722"
	I1206 09:11:24.366850  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366901  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:24.366979  388517 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-269722"
	I1206 09:11:24.367005  388517 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-269722"
	I1206 09:11:24.367028  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367504  388517 addons.go:70] Setting registry=true in profile "addons-269722"
	I1206 09:11:24.367531  388517 addons.go:239] Setting addon registry=true in "addons-269722"
	I1206 09:11:24.367561  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367879  388517 addons.go:70] Setting ingress=true in profile "addons-269722"
	I1206 09:11:24.367904  388517 addons.go:239] Setting addon ingress=true in "addons-269722"
	I1206 09:11:24.366811  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367940  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367975  388517 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-269722"
	I1206 09:11:24.367998  388517 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-269722"
	I1206 09:11:24.368012  388517 addons.go:70] Setting volcano=true in profile "addons-269722"
	I1206 09:11:24.368028  388517 addons.go:239] Setting addon volcano=true in "addons-269722"
	I1206 09:11:24.368051  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368065  388517 addons.go:70] Setting volumesnapshots=true in profile "addons-269722"
	I1206 09:11:24.368083  388517 addons.go:239] Setting addon volumesnapshots=true in "addons-269722"
	I1206 09:11:24.368108  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368318  388517 addons.go:70] Setting ingress-dns=true in profile "addons-269722"
	I1206 09:11:24.368334  388517 addons.go:239] Setting addon ingress-dns=true in "addons-269722"
	I1206 09:11:24.368504  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368582  388517 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-269722"
	I1206 09:11:24.368650  388517 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-269722"
	I1206 09:11:24.368672  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368873  388517 out.go:179] * Verifying Kubernetes components...
	I1206 09:11:24.366646  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.370225  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:24.371769  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.373754  388517 addons.go:239] Setting addon default-storageclass=true in "addons-269722"
	I1206 09:11:24.373789  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.374301  388517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:11:24.374379  388517 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 09:11:24.375268  388517 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 09:11:24.375275  388517 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 09:11:24.375328  388517 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:24.375343  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:11:24.376013  388517 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 09:11:24.376046  388517 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 09:11:24.376074  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 09:11:24.376035  388517 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 09:11:24.376134  388517 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-269722"
	I1206 09:11:24.376581  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.376790  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 09:11:24.376809  388517 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 09:11:24.376827  388517 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:11:24.376841  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 09:11:24.376847  388517 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 09:11:24.377596  388517 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:11:24.377612  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 09:11:24.378229  388517 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 09:11:24.378237  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 09:11:24.378252  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 09:11:24.378268  388517 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 09:11:24.378231  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:24.378298  388517 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1206 09:11:24.378904  388517 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 09:11:24.378904  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 09:11:24.378253  388517 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 09:11:24.379492  388517 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:24.379507  388517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:11:24.379650  388517 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 09:11:24.379665  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 09:11:24.379672  388517 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:11:24.379683  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 09:11:24.380334  388517 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:11:24.380373  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 09:11:24.380344  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 09:11:24.380559  388517 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 09:11:24.380561  388517 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 09:11:24.381552  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 09:11:24.381577  388517 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1206 09:11:24.382302  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:24.382322  388517 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:11:24.382342  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 09:11:24.384092  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 09:11:24.384119  388517 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1206 09:11:24.384134  388517 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 09:11:24.384092  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 09:11:24.385853  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.386682  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 09:11:24.386986  388517 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:11:24.387009  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 09:11:24.387404  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.387763  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.387799  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388004  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388093  388517 out.go:179]   - Using image docker.io/busybox:stable
	I1206 09:11:24.388126  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388701  388517 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:11:24.388724  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1206 09:11:24.389099  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.389150  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.389220  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 09:11:24.389288  388517 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:11:24.389303  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 09:11:24.389924  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.389981  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390249  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390264  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390288  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390293  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390722  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.390908  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390941  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.391134  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.391134  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.391542  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 09:11:24.391835  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.392214  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.392478  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.393141  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394085  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394128  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394319  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394473  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394510  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394522  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394539  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394585  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 09:11:24.394628  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394751  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.395613  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396225  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396316  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396359  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.396795  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.396833  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397321  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397322  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.397417  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397434  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397472  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397481  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397505  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397761  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 09:11:24.397813  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.397879  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.398225  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.398815  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 09:11:24.398876  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 09:11:24.398990  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399146  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399416  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399466  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.399501  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399518  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.399553  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399720  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.399930  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.400166  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.400198  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.400399  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.401986  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.402373  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.402406  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.402558  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	W1206 09:11:24.544745  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34226->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.544776  388517 retry.go:31] will retry after 167.524935ms: ssh: handshake failed: read tcp 192.168.39.1:34226->192.168.39.220:22: read: connection reset by peer
	W1206 09:11:24.544834  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34242->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.544842  388517 retry.go:31] will retry after 337.340492ms: ssh: handshake failed: read tcp 192.168.39.1:34242->192.168.39.220:22: read: connection reset by peer
	W1206 09:11:24.586807  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34260->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.586836  388517 retry.go:31] will retry after 361.026308ms: ssh: handshake failed: read tcp 192.168.39.1:34260->192.168.39.220:22: read: connection reset by peer
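The three warnings above show minikube's sshutil dialing the guest before sshd is fully ready: the TCP handshake is reset by the peer and each failed dial is retried after a short randomized delay. The following is a minimal, self-contained Go sketch of that retry-with-jitter pattern, included only as an illustration; it is not minikube's actual retry.go, and the attempt count, base delay, and error text are assumptions chosen to mirror the log.

// retry_sketch.go - hedged illustration of retry-after-a-jittered-delay,
// as seen in the sshutil "dial failure (will retry)" lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered delay between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Random jitter on top of the base delay, roughly like the
		// 167ms / 337ms / 361ms waits in the log.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: connection reset by peer")
		}
		return nil
	})
	fmt.Println("result:", err)
}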
	I1206 09:11:24.720251  388517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:24.720260  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:11:24.915042  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:24.943642  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:11:24.946926  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:11:25.098136  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:25.119770  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 09:11:25.119795  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 09:11:25.208175  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:11:25.224407  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 09:11:25.224432  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 09:11:25.225309  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:11:25.232666  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 09:11:25.232682  388517 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 09:11:25.246755  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 09:11:25.246777  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 09:11:25.247663  388517 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 09:11:25.247683  388517 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 09:11:25.270838  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:11:25.331361  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:11:25.449965  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 09:11:25.469046  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:11:25.613424  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 09:11:25.613456  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 09:11:25.633923  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 09:11:25.633954  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 09:11:25.657079  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 09:11:25.657110  388517 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 09:11:25.695667  388517 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:11:25.695693  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 09:11:25.696553  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:11:25.756474  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 09:11:25.756502  388517 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 09:11:26.160704  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 09:11:26.160736  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 09:11:26.284633  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 09:11:26.284662  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 09:11:26.286985  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:11:26.434395  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 09:11:26.434422  388517 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 09:11:26.465197  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:11:26.465225  388517 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 09:11:26.661217  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:11:26.661249  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 09:11:26.705778  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:11:26.774501  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 09:11:26.774527  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 09:11:26.849719  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:11:26.906080  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 09:11:26.906136  388517 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 09:11:27.000268  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 09:11:27.000294  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 09:11:27.610778  388517 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:27.610815  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 09:11:27.800583  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 09:11:27.800607  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 09:11:27.882544  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:28.272413  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 09:11:28.272451  388517 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 09:11:28.298383  388517 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.578087161s)
	I1206 09:11:28.298435  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.38335524s)
	I1206 09:11:28.298380  388517 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.578018491s)
	I1206 09:11:28.298514  388517 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1206 09:11:28.298551  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.354877639s)
	I1206 09:11:28.298640  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.351685893s)
	I1206 09:11:28.299174  388517 node_ready.go:35] waiting up to 6m0s for node "addons-269722" to be "Ready" ...
	I1206 09:11:28.373103  388517 node_ready.go:49] node "addons-269722" is "Ready"
	I1206 09:11:28.373131  388517 node_ready.go:38] duration metric: took 73.939285ms for node "addons-269722" to be "Ready" ...
	I1206 09:11:28.373146  388517 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:11:28.373191  388517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:11:28.564603  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 09:11:28.564627  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 09:11:28.805525  388517 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-269722" context rescaled to 1 replicas
	I1206 09:11:28.892887  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 09:11:28.892912  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 09:11:29.154236  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:11:29.154271  388517 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 09:11:29.383179  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:11:31.838578  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.740399341s)
	I1206 09:11:31.842964  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 09:11:31.846059  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:31.846625  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:31.846661  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:31.846877  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:32.206384  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 09:11:32.398884  388517 addons.go:239] Setting addon gcp-auth=true in "addons-269722"
	I1206 09:11:32.398959  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:32.401192  388517 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 09:11:32.404036  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:32.404508  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:32.404543  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:32.404739  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:33.380508  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.172285689s)
	I1206 09:11:33.380567  388517 addons.go:495] Verifying addon ingress=true in "addons-269722"
	I1206 09:11:33.380566  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.155226513s)
	I1206 09:11:33.380618  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.109753242s)
	I1206 09:11:33.382778  388517 out.go:179] * Verifying ingress addon...
	I1206 09:11:33.384997  388517 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 09:11:33.394151  388517 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:11:33.394167  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:33.983745  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:34.442405  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:34.961428  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:35.544843  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:35.959086  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:36.477596  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:36.933661  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:37.492983  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:37.907682  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:38.464342  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:38.476878  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.145459322s)
	I1206 09:11:38.476953  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.026949113s)
	I1206 09:11:38.477048  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.007974684s)
	I1206 09:11:38.477116  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.780538742s)
	I1206 09:11:38.477233  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.190220804s)
	I1206 09:11:38.477253  388517 addons.go:495] Verifying addon registry=true in "addons-269722"
	I1206 09:11:38.477312  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.77149962s)
	I1206 09:11:38.477336  388517 addons.go:495] Verifying addon metrics-server=true in "addons-269722"
	I1206 09:11:38.477363  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.627610125s)
	I1206 09:11:38.477525  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.594927288s)
	I1206 09:11:38.477544  388517 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.104332654s)
	I1206 09:11:38.477571  388517 api_server.go:72] duration metric: took 14.11116064s to wait for apiserver process to appear ...
	I1206 09:11:38.477583  388517 api_server.go:88] waiting for apiserver healthz status ...
	W1206 09:11:38.477581  388517 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:11:38.477604  388517 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1206 09:11:38.477604  388517 retry.go:31] will retry after 298.178363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
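The failure above is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same kubectl apply batch that installs the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind when the custom resource arrives. The addon manager handles this by retrying (and, further down in the log, re-applying with --force). Below is a hedged Go sketch of one way to avoid the race by applying the CRDs first, waiting for them to be established, and only then applying the dependent resource; the file names and the assumption that kubectl is on PATH are illustrative, not minikube's actual layout.

// crd_ordering_sketch.go - hedged illustration of applying CRDs before the
// custom resources that depend on them, avoiding
// "no matches for kind \"VolumeSnapshotClass\"".
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// kubectl shells out to the kubectl binary and echoes its combined output.
func kubectl(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	steps := [][]string{
		// 1. Install the snapshot CRDs.
		{"apply", "-f", "snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
		{"apply", "-f", "snapshot.storage.k8s.io_volumesnapshotcontents.yaml"},
		{"apply", "-f", "snapshot.storage.k8s.io_volumesnapshots.yaml"},
		// 2. Wait until the API server has registered the new kinds.
		{"wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
		// 3. Only now create the csi-hostpath VolumeSnapshotClass.
		{"apply", "-f", "csi-hostpath-snapshotclass.yaml"},
	}
	for _, args := range steps {
		if err := kubectl(args...); err != nil {
			log.Fatalf("kubectl %v failed: %v", args, err)
		}
	}
}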
	I1206 09:11:38.477795  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.094573264s)
	I1206 09:11:38.477823  388517 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-269722"
	I1206 09:11:38.477842  388517 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.076624226s)
	I1206 09:11:38.478884  388517 out.go:179] * Verifying registry addon...
	I1206 09:11:38.478890  388517 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-269722 service yakd-dashboard -n yakd-dashboard
	
	I1206 09:11:38.479684  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:38.479686  388517 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 09:11:38.481128  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 09:11:38.482570  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 09:11:38.482875  388517 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 09:11:38.483935  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 09:11:38.483956  388517 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 09:11:38.542927  388517 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1206 09:11:38.560082  388517 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:11:38.560109  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:38.560250  388517 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:11:38.560266  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:38.564812  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 09:11:38.564836  388517 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 09:11:38.577730  388517 api_server.go:141] control plane version: v1.34.2
	I1206 09:11:38.577765  388517 api_server.go:131] duration metric: took 100.173477ms to wait for apiserver health ...
	I1206 09:11:38.577777  388517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:11:38.641466  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:11:38.641493  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 09:11:38.668346  388517 system_pods.go:59] 20 kube-system pods found
	I1206 09:11:38.668390  388517 system_pods.go:61] "amd-gpu-device-plugin-4x5bp" [200b561d-9b38-41b5-b7ed-1d1b8aa9c977] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:11:38.668407  388517 system_pods.go:61] "coredns-66bc5c9577-l7sr8" [863c5ad0-c918-455d-8af1-40c9e1948ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:38.668417  388517 system_pods.go:61] "coredns-66bc5c9577-tn6dd" [1471497e-5fa4-48d4-a3c2-4d89904ed640] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1206 09:11:38.668435  388517 system_pods.go:61] "csi-hostpath-attacher-0" [bd1f1e77-8cad-40a2-97e3-2b05daf622f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:11:38.668450  388517 system_pods.go:61] "csi-hostpath-resizer-0" [4ed9076c-603a-48cd-a0d1-189d5fd51651] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:11:38.668460  388517 system_pods.go:61] "csi-hostpathplugin-c5bss" [d0b3695c-3b42-4065-9bdf-1b2206023c5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:11:38.668469  388517 system_pods.go:61] "etcd-addons-269722" [751c8eff-2c50-4b41-9193-90db8a0636bf] Running
	I1206 09:11:38.668476  388517 system_pods.go:61] "kube-apiserver-addons-269722" [d32278cf-92c2-455c-b174-fb8a83dadda4] Running
	I1206 09:11:38.668484  388517 system_pods.go:61] "kube-controller-manager-addons-269722" [7e253ad0-19bb-4870-926b-a1569f6f1398] Running
	I1206 09:11:38.668493  388517 system_pods.go:61] "kube-ingress-dns-minikube" [be7d521f-b31b-4231-bd74-8a66d93c3fc4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:11:38.668501  388517 system_pods.go:61] "kube-proxy-c2km9" [fb4b1fd3-c1e4-4d05-b0c9-5b52f82e1849] Running
	I1206 09:11:38.668508  388517 system_pods.go:61] "kube-scheduler-addons-269722" [73132ab3-f6c2-40cb-b3ba-aee3ff21019d] Running
	I1206 09:11:38.668520  388517 system_pods.go:61] "metrics-server-85b7d694d7-h2jq2" [c453240d-89be-44da-9070-e49d7ebbc593] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:11:38.668526  388517 system_pods.go:61] "nvidia-device-plugin-daemonset-knqvl" [916799e0-a31e-4b9a-9acc-b02b72d66299] Running
	I1206 09:11:38.668535  388517 system_pods.go:61] "registry-6b586f9694-rbbt6" [ec4e4a7f-6fd3-435d-bd23-ab587ffa45ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:11:38.668543  388517 system_pods.go:61] "registry-creds-764b6fb674-hkrh8" [b7741462-59ef-4947-ac5d-b5ffab88a570] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:11:38.668558  388517 system_pods.go:61] "registry-proxy-hbw67" [d47f2901-94d3-4e16-a0a8-5155e3f36879] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:11:38.668574  388517 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qbp6w" [0ead8e94-20c0-4dec-801d-66bd3dc39a02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.668644  388517 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v9sd5" [84d9cd78-04cb-4f8d-b8e7-a694b55e490a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.668650  388517 system_pods.go:61] "storage-provisioner" [07857490-6084-4734-a54d-f7de8ca29ea5] Running
	I1206 09:11:38.668660  388517 system_pods.go:74] duration metric: took 90.874732ms to wait for pod list to return data ...
	I1206 09:11:38.668672  388517 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:11:38.705679  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:11:38.776568  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:38.781850  388517 default_sa.go:45] found service account: "default"
	I1206 09:11:38.781885  388517 default_sa.go:55] duration metric: took 113.206818ms for default service account to be created ...
	I1206 09:11:38.781896  388517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:11:38.893236  388517 system_pods.go:86] 20 kube-system pods found
	I1206 09:11:38.893269  388517 system_pods.go:89] "amd-gpu-device-plugin-4x5bp" [200b561d-9b38-41b5-b7ed-1d1b8aa9c977] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:11:38.893310  388517 system_pods.go:89] "coredns-66bc5c9577-l7sr8" [863c5ad0-c918-455d-8af1-40c9e1948ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:38.893318  388517 system_pods.go:89] "coredns-66bc5c9577-tn6dd" [1471497e-5fa4-48d4-a3c2-4d89904ed640] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1206 09:11:38.893328  388517 system_pods.go:89] "csi-hostpath-attacher-0" [bd1f1e77-8cad-40a2-97e3-2b05daf622f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:11:38.893334  388517 system_pods.go:89] "csi-hostpath-resizer-0" [4ed9076c-603a-48cd-a0d1-189d5fd51651] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:11:38.893340  388517 system_pods.go:89] "csi-hostpathplugin-c5bss" [d0b3695c-3b42-4065-9bdf-1b2206023c5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:11:38.893344  388517 system_pods.go:89] "etcd-addons-269722" [751c8eff-2c50-4b41-9193-90db8a0636bf] Running
	I1206 09:11:38.893348  388517 system_pods.go:89] "kube-apiserver-addons-269722" [d32278cf-92c2-455c-b174-fb8a83dadda4] Running
	I1206 09:11:38.893352  388517 system_pods.go:89] "kube-controller-manager-addons-269722" [7e253ad0-19bb-4870-926b-a1569f6f1398] Running
	I1206 09:11:38.893357  388517 system_pods.go:89] "kube-ingress-dns-minikube" [be7d521f-b31b-4231-bd74-8a66d93c3fc4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:11:38.893361  388517 system_pods.go:89] "kube-proxy-c2km9" [fb4b1fd3-c1e4-4d05-b0c9-5b52f82e1849] Running
	I1206 09:11:38.893364  388517 system_pods.go:89] "kube-scheduler-addons-269722" [73132ab3-f6c2-40cb-b3ba-aee3ff21019d] Running
	I1206 09:11:38.893369  388517 system_pods.go:89] "metrics-server-85b7d694d7-h2jq2" [c453240d-89be-44da-9070-e49d7ebbc593] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:11:38.893374  388517 system_pods.go:89] "nvidia-device-plugin-daemonset-knqvl" [916799e0-a31e-4b9a-9acc-b02b72d66299] Running
	I1206 09:11:38.893379  388517 system_pods.go:89] "registry-6b586f9694-rbbt6" [ec4e4a7f-6fd3-435d-bd23-ab587ffa45ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:11:38.893383  388517 system_pods.go:89] "registry-creds-764b6fb674-hkrh8" [b7741462-59ef-4947-ac5d-b5ffab88a570] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:11:38.893389  388517 system_pods.go:89] "registry-proxy-hbw67" [d47f2901-94d3-4e16-a0a8-5155e3f36879] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:11:38.893395  388517 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qbp6w" [0ead8e94-20c0-4dec-801d-66bd3dc39a02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.893400  388517 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v9sd5" [84d9cd78-04cb-4f8d-b8e7-a694b55e490a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.893403  388517 system_pods.go:89] "storage-provisioner" [07857490-6084-4734-a54d-f7de8ca29ea5] Running
	I1206 09:11:38.893410  388517 system_pods.go:126] duration metric: took 111.509411ms to wait for k8s-apps to be running ...
	I1206 09:11:38.893420  388517 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:11:38.893463  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:39.039991  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:39.105053  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:39.105115  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:39.435086  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:39.577305  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:39.578361  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:39.891557  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.023055  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.023335  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:40.299367  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.593645009s)
	I1206 09:11:40.300442  388517 addons.go:495] Verifying addon gcp-auth=true in "addons-269722"
	I1206 09:11:40.302591  388517 out.go:179] * Verifying gcp-auth addon...
	I1206 09:11:40.304667  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 09:11:40.334052  388517 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 09:11:40.334086  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:40.389629  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.490307  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:40.490431  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.813628  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:40.836756  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.060127251s)
	I1206 09:11:40.836796  388517 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.943309249s)
	I1206 09:11:40.836822  388517 system_svc.go:56] duration metric: took 1.943395217s WaitForService to wait for kubelet
	I1206 09:11:40.836835  388517 kubeadm.go:587] duration metric: took 16.470422509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:11:40.836870  388517 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:11:40.843939  388517 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 09:11:40.843963  388517 node_conditions.go:123] node cpu capacity is 2
	I1206 09:11:40.843980  388517 node_conditions.go:105] duration metric: took 7.101649ms to run NodePressure ...
	I1206 09:11:40.844002  388517 start.go:242] waiting for startup goroutines ...
	I1206 09:11:40.890430  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.986853  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.992475  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:41.355777  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:41.389062  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:41.487963  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:41.489146  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:41.808891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:41.889779  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:41.985833  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:41.987429  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:42.308166  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:42.409444  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:42.510304  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:42.511035  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:42.809432  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:42.888458  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:42.984315  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:42.987586  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:43.308446  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:43.388943  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:43.496391  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:43.496607  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:43.808230  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:43.888549  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:43.984398  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:43.986840  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:44.312899  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:44.514152  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:44.514383  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:44.515204  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:44.811435  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:44.888384  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:44.984563  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:44.986735  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:45.307401  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:45.388721  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:45.486271  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:45.488952  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:45.808083  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:45.888466  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:45.985838  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:45.987005  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:46.309162  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:46.390486  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:46.484411  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:46.486023  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:46.809473  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:46.888547  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:46.984691  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:46.987824  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:47.308194  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:47.388621  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:47.488407  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:47.488489  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:47.808350  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:47.888379  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:47.984429  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:47.986654  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:48.308303  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:48.391026  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:48.664162  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:48.666762  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:48.808417  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:48.888241  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:48.983979  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:48.986690  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:49.308241  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:49.388925  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:49.484568  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:49.486742  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:49.809515  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:49.889646  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:49.987428  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:49.988527  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:50.366787  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:50.389057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:50.486489  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:50.487907  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:50.810176  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:50.910430  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:50.984648  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:50.992028  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.319081  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:51.388999  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:51.489012  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:51.492499  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.808942  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:51.896270  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:51.990446  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.992371  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:52.309057  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:52.389352  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:52.484414  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:52.486682  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:52.809190  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:52.888338  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:52.991907  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:52.992417  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.307785  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:53.390249  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:53.484717  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.486614  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:53.810677  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:53.889084  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:53.987650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.990484  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:54.315414  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:54.395125  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:54.494235  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:54.494236  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:54.824289  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:54.888711  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:54.984659  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:54.987146  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:55.308481  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:55.390618  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:55.484329  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:55.485893  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:55.809298  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:55.895192  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:55.989404  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:55.993237  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:56.311289  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:56.389393  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:56.487349  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:56.487525  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:56.808606  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:56.889213  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:56.985510  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:56.991535  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:57.308723  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:57.388636  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:57.488790  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:57.490213  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:57.809073  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:57.887830  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:57.984304  388517 kapi.go:107] duration metric: took 19.503171238s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 09:11:57.987671  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:58.309052  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:58.389257  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:58.490899  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:58.809457  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:58.890577  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.025290  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:59.309296  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:59.392111  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.492783  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:59.807475  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:59.892512  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.986432  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:00.357752  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:00.391649  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:00.485367  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:00.809392  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:00.887883  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:00.986127  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:01.312877  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:01.413507  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:01.486873  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:01.809042  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:01.889057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:01.986042  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:02.311892  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:02.390027  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:02.491375  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:02.923841  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:02.927183  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:02.986095  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:03.309017  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:03.390050  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:03.486194  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:03.812456  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:03.892317  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:03.986695  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:04.308544  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:04.389102  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:04.486496  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:04.810301  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:04.888379  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:04.986924  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:05.308837  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:05.390825  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:05.485772  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:05.807540  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:05.888733  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:05.985799  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:06.310889  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:06.389329  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:06.492425  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:06.808561  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:06.888635  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:06.985484  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:07.309758  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:07.390275  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:07.486771  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:07.807681  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:07.888485  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:07.987584  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:08.309272  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:08.388617  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:08.487646  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:08.809312  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:08.888519  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:08.988459  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:09.309597  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:09.411374  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:09.487378  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:09.812712  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:09.912033  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.012090  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:10.308609  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:10.389736  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.488553  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:10.808609  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:10.893781  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.986159  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:11.669172  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:11.670324  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:11.671190  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:11.811594  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:11.892535  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:11.985928  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:12.310097  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:12.390596  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:12.489116  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:12.809321  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:12.890619  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:12.987653  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:13.309120  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:13.388316  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:13.488650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:13.808316  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:13.889333  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:13.986213  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:14.308276  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:14.388283  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:14.487207  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:14.808143  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:14.888955  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:14.986279  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:15.309037  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:15.388329  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:15.488214  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:15.810501  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:15.896511  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:15.986845  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:16.307928  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:16.390728  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:16.485976  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:16.816944  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:16.970568  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:16.988372  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:17.312911  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:17.390564  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:17.486836  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:17.811792  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:17.891576  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:17.988049  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:18.309919  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:18.388844  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:18.486086  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:18.809596  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:18.890914  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:18.986230  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:19.310480  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:19.410702  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:19.486633  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:19.807918  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:19.888811  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:19.987072  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:20.309606  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:20.412057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:20.512925  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:20.817199  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:20.949254  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:20.990626  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:21.312159  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:21.389204  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:21.488639  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:21.810891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:21.888759  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:21.988415  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:22.309245  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:22.391268  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:22.486340  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:22.808382  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:22.889770  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:22.988997  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:23.309823  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:23.388910  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:23.489579  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:23.810562  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:23.889125  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:23.986750  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:24.308898  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:24.389306  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:24.486339  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:24.809381  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:24.888322  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:24.987056  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:25.309252  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:25.388372  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:25.486924  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:25.810099  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:25.891569  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:25.993945  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:26.314253  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:26.503975  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:26.504104  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:26.811809  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:26.889063  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:26.990570  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:27.308661  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:27.388783  388517 kapi.go:107] duration metric: took 54.003785227s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 09:12:27.539433  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:27.808824  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:27.987339  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:28.311281  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:28.487383  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:28.810397  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:28.990303  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:29.309345  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:29.488470  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:29.811844  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:29.987408  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:30.311108  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:30.487049  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:30.807650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:30.986406  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:31.309915  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:31.486400  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:31.814032  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:31.989103  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:32.311817  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:32.486527  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:32.808601  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:32.989352  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:33.309084  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:33.486427  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:33.809272  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:33.986717  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:34.308891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:34.486989  388517 kapi.go:107] duration metric: took 56.004420234s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 09:12:34.808808  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:35.310012  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:35.808588  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:36.309169  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:36.808993  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:37.310066  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:37.808459  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:38.308629  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:38.811741  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:39.309361  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:39.809037  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:40.308704  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:40.808398  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:41.307791  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:41.808294  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:42.308956  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:42.809502  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:43.307669  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:43.810175  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:44.309568  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:44.809320  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:45.309320  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:45.807962  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:46.311821  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:46.808138  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:47.308750  388517 kapi.go:107] duration metric: took 1m7.004080739s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 09:12:47.309965  388517 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-269722 cluster.
	I1206 09:12:47.310907  388517 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 09:12:47.312086  388517 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 09:12:47.313288  388517 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, storage-provisioner, inspektor-gadget, registry-creds, volcano, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1206 09:12:47.314294  388517 addons.go:530] duration metric: took 1m22.947828238s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass storage-provisioner inspektor-gadget registry-creds volcano cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1206 09:12:47.314341  388517 start.go:247] waiting for cluster config update ...
	I1206 09:12:47.314373  388517 start.go:256] writing updated cluster config ...
	I1206 09:12:47.314678  388517 ssh_runner.go:195] Run: rm -f paused
	I1206 09:12:47.321984  388517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:12:47.325938  388517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l7sr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.331363  388517 pod_ready.go:94] pod "coredns-66bc5c9577-l7sr8" is "Ready"
	I1206 09:12:47.331382  388517 pod_ready.go:86] duration metric: took 5.423953ms for pod "coredns-66bc5c9577-l7sr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.333935  388517 pod_ready.go:83] waiting for pod "etcd-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.339670  388517 pod_ready.go:94] pod "etcd-addons-269722" is "Ready"
	I1206 09:12:47.339686  388517 pod_ready.go:86] duration metric: took 5.735911ms for pod "etcd-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.341852  388517 pod_ready.go:83] waiting for pod "kube-apiserver-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.348825  388517 pod_ready.go:94] pod "kube-apiserver-addons-269722" is "Ready"
	I1206 09:12:47.348841  388517 pod_ready.go:86] duration metric: took 6.965989ms for pod "kube-apiserver-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.351661  388517 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.728666  388517 pod_ready.go:94] pod "kube-controller-manager-addons-269722" is "Ready"
	I1206 09:12:47.728694  388517 pod_ready.go:86] duration metric: took 377.017246ms for pod "kube-controller-manager-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.928250  388517 pod_ready.go:83] waiting for pod "kube-proxy-c2km9" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.326318  388517 pod_ready.go:94] pod "kube-proxy-c2km9" is "Ready"
	I1206 09:12:48.326347  388517 pod_ready.go:86] duration metric: took 398.070754ms for pod "kube-proxy-c2km9" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.527945  388517 pod_ready.go:83] waiting for pod "kube-scheduler-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.925436  388517 pod_ready.go:94] pod "kube-scheduler-addons-269722" is "Ready"
	I1206 09:12:48.925477  388517 pod_ready.go:86] duration metric: took 397.504009ms for pod "kube-scheduler-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.925497  388517 pod_ready.go:40] duration metric: took 1.603486959s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:12:48.968795  388517 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:12:48.970523  388517 out.go:179] * Done! kubectl is now configured to use "addons-269722" cluster and "default" namespace by default
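	As context for the gcp-auth messages logged at 09:12:47 above: opting a pod out of credential mounting is done with the `gcp-auth-skip-secret` label key named in that output. A minimal sketch of such a pod spec follows; the pod name and the label value "true" are assumptions for illustration, not taken from this run.
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds               # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"   # label key from the gcp-auth output above; the value "true" is assumed
	    spec:
	      containers:
	      - name: app
	        image: busybox
	        command: ["sleep", "3600"]
	
	As the same output notes, pods created before the addon was enabled need to be recreated, or the addon re-enabled with --refresh, for the mount to apply.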
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	557d40ab6aa66       56cc512116c8f       8 minutes ago       Running             busybox                   0                   09f2b56f9baa0       busybox                                    default
	c4cccebac4fc4       97fe896f8c07b       15 minutes ago      Running             controller                0                   9ee054c3901ad       ingress-nginx-controller-6c8bf45fb-ndk8c   ingress-nginx
	864a2ecb4396f       884bd0ac01c8f       15 minutes ago      Exited              patch                     0                   3ddf53bb8795f       ingress-nginx-admission-patch-xpn6k        ingress-nginx
	c2e7e0b7588b1       884bd0ac01c8f       15 minutes ago      Exited              create                    0                   1ca23ac12776f       ingress-nginx-admission-create-kl75g       ingress-nginx
	2774623c95b6c       b6ab53fbfedaa       15 minutes ago      Running             minikube-ingress-dns      0                   a84f9f0b8a344       kube-ingress-dns-minikube                  kube-system
	d9e6d13d8e418       d5e667c0f2bb6       16 minutes ago      Running             amd-gpu-device-plugin     0                   479fca73c33e3       amd-gpu-device-plugin-4x5bp                kube-system
	a9394a7445ed6       6e38f40d628db       16 minutes ago      Running             storage-provisioner       0                   89b1f84c8945f       storage-provisioner                        kube-system
	e636e6172c8c9       52546a367cc9e       16 minutes ago      Running             coredns                   0                   18cf9f60905af       coredns-66bc5c9577-l7sr8                   kube-system
	d9ab1c94b0adc       8aa150647e88a       16 minutes ago      Running             kube-proxy                0                   7ce46fc8fe779       kube-proxy-c2km9                           kube-system
	f7319b640fed7       a3e246e9556e9       16 minutes ago      Running             etcd                      0                   5d2b5e40c2235       etcd-addons-269722                         kube-system
	31363d509c1e7       88320b5498ff2       16 minutes ago      Running             kube-scheduler            0                   f53f47f2f0dc9       kube-scheduler-addons-269722               kube-system
	c301895eb03e7       01e8bacf0f500       16 minutes ago      Running             kube-controller-manager   0                   afc5069ef7820       kube-controller-manager-addons-269722      kube-system
	95341ea890f7a       a5f569d49a979       16 minutes ago      Running             kube-apiserver            0                   fb1d3f9401a55       kube-apiserver-addons-269722               kube-system
	
	
	==> containerd <==
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.077496349Z" level=info msg="RemovePodSandbox \"485b2f551e86072d9503a03985e0ff31d6de57b61e7b02d001289370c680267b\" returns successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.078663671Z" level=info msg="StopPodSandbox for \"49c8968cc1ce152454c6efeae6f89cf2fefffa31e9264c9fd54c0f97fe972d23\""
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.107099119Z" level=info msg="TearDown network for sandbox \"49c8968cc1ce152454c6efeae6f89cf2fefffa31e9264c9fd54c0f97fe972d23\" successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.107138838Z" level=info msg="StopPodSandbox for \"49c8968cc1ce152454c6efeae6f89cf2fefffa31e9264c9fd54c0f97fe972d23\" returns successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.107586177Z" level=info msg="RemovePodSandbox for \"49c8968cc1ce152454c6efeae6f89cf2fefffa31e9264c9fd54c0f97fe972d23\""
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.107626126Z" level=info msg="Forcibly stopping sandbox \"49c8968cc1ce152454c6efeae6f89cf2fefffa31e9264c9fd54c0f97fe972d23\""
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.131870773Z" level=info msg="TearDown network for sandbox \"49c8968cc1ce152454c6efeae6f89cf2fefffa31e9264c9fd54c0f97fe972d23\" successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.136857242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49c8968cc1ce152454c6efeae6f89cf2fefffa31e9264c9fd54c0f97fe972d23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.136925927Z" level=info msg="RemovePodSandbox \"49c8968cc1ce152454c6efeae6f89cf2fefffa31e9264c9fd54c0f97fe972d23\" returns successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.137739274Z" level=info msg="StopPodSandbox for \"73074a1a93680dc63b1f2f65a8e18a69e95d634c96ed0d71563288539286945d\""
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.163424928Z" level=info msg="TearDown network for sandbox \"73074a1a93680dc63b1f2f65a8e18a69e95d634c96ed0d71563288539286945d\" successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.163446575Z" level=info msg="StopPodSandbox for \"73074a1a93680dc63b1f2f65a8e18a69e95d634c96ed0d71563288539286945d\" returns successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.163884963Z" level=info msg="RemovePodSandbox for \"73074a1a93680dc63b1f2f65a8e18a69e95d634c96ed0d71563288539286945d\""
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.164208806Z" level=info msg="Forcibly stopping sandbox \"73074a1a93680dc63b1f2f65a8e18a69e95d634c96ed0d71563288539286945d\""
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.189408975Z" level=info msg="TearDown network for sandbox \"73074a1a93680dc63b1f2f65a8e18a69e95d634c96ed0d71563288539286945d\" successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.194148979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73074a1a93680dc63b1f2f65a8e18a69e95d634c96ed0d71563288539286945d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.194194879Z" level=info msg="RemovePodSandbox \"73074a1a93680dc63b1f2f65a8e18a69e95d634c96ed0d71563288539286945d\" returns successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.194757874Z" level=info msg="StopPodSandbox for \"a312cf43898ad89ca9357660d3cb73736fa3874319c726cb7fe87e01b76ff5f5\""
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.220505825Z" level=info msg="TearDown network for sandbox \"a312cf43898ad89ca9357660d3cb73736fa3874319c726cb7fe87e01b76ff5f5\" successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.220587837Z" level=info msg="StopPodSandbox for \"a312cf43898ad89ca9357660d3cb73736fa3874319c726cb7fe87e01b76ff5f5\" returns successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.221187163Z" level=info msg="RemovePodSandbox for \"a312cf43898ad89ca9357660d3cb73736fa3874319c726cb7fe87e01b76ff5f5\""
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.221331650Z" level=info msg="Forcibly stopping sandbox \"a312cf43898ad89ca9357660d3cb73736fa3874319c726cb7fe87e01b76ff5f5\""
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.246461615Z" level=info msg="TearDown network for sandbox \"a312cf43898ad89ca9357660d3cb73736fa3874319c726cb7fe87e01b76ff5f5\" successfully"
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.251515334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a312cf43898ad89ca9357660d3cb73736fa3874319c726cb7fe87e01b76ff5f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Dec 06 09:26:22 addons-269722 containerd[831]: time="2025-12-06T09:26:22.251647878Z" level=info msg="RemovePodSandbox \"a312cf43898ad89ca9357660d3cb73736fa3874319c726cb7fe87e01b76ff5f5\" returns successfully"
	
	
	==> coredns [e636e6172c8c93ebe7783047ae4449227f6f37f80a082ff4fd383ebc5d08fdbe] <==
	[INFO] 10.244.0.8:51474 - 59613 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000209051s
	[INFO] 10.244.0.8:51474 - 58064 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00013173s
	[INFO] 10.244.0.8:51474 - 29072 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084614s
	[INFO] 10.244.0.8:51474 - 28407 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000124845s
	[INFO] 10.244.0.8:51474 - 5185 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106747s
	[INFO] 10.244.0.8:51474 - 28903 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000097914s
	[INFO] 10.244.0.8:51474 - 44135 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000086701s
	[INFO] 10.244.0.8:42198 - 56025 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000124465s
	[INFO] 10.244.0.8:42198 - 58448 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118323s
	[INFO] 10.244.0.8:40240 - 52465 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104193s
	[INFO] 10.244.0.8:40240 - 52746 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113004s
	[INFO] 10.244.0.8:49362 - 65347 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126485s
	[INFO] 10.244.0.8:49362 - 110 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000216341s
	[INFO] 10.244.0.8:51040 - 59068 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087119s
	[INFO] 10.244.0.8:51040 - 59346 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118565s
	[INFO] 10.244.0.27:48228 - 49165 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000319642s
	[INFO] 10.244.0.27:40396 - 12915 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001198011s
	[INFO] 10.244.0.27:39038 - 53409 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158695s
	[INFO] 10.244.0.27:59026 - 7807 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134321s
	[INFO] 10.244.0.27:32836 - 36351 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085705s
	[INFO] 10.244.0.27:33578 - 24448 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114082s
	[INFO] 10.244.0.27:49566 - 16674 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003361826s
	[INFO] 10.244.0.27:37372 - 21961 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004334216s
	[INFO] 10.244.0.31:37715 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000570157s
	[INFO] 10.244.0.31:57352 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162981s
	
	
	==> describe nodes <==
	Name:               addons-269722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-269722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-269722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_11_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-269722
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-269722
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:27:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:26:49 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:26:49 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:26:49 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:26:49 +0000   Sat, 06 Dec 2025 09:11:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    addons-269722
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 faaa974faf9d46f8a3b502afcdf78e43
	  System UUID:                faaa974f-af9d-46f8-a3b5-02afcdf78e43
	  Boot ID:                    33004088-aa48-42d5-ac29-91fbfe5a6c68
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-ndk8c    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         16m
	  kube-system                 amd-gpu-device-plugin-4x5bp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-l7sr8                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 etcd-addons-269722                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-269722                250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-269722       200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-c2km9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-269722                100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-269722 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-269722 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node addons-269722 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node addons-269722 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node addons-269722 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node addons-269722 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m                kubelet          Node addons-269722 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node addons-269722 event: Registered Node addons-269722 in Controller
	
	
	==> dmesg <==
	[  +5.920301] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 09:12] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.529426] kauditd_printk_skb: 85 callbacks suppressed
	[  +1.897097] kauditd_printk_skb: 166 callbacks suppressed
	[  +2.318976] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.568626] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.319087] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.000694] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 53 callbacks suppressed
	[Dec 6 09:18] kauditd_printk_skb: 47 callbacks suppressed
	[ +48.658661] kauditd_printk_skb: 11 callbacks suppressed
	[Dec 6 09:19] kauditd_printk_skb: 67 callbacks suppressed
	[ +10.881930] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000283] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.748225] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 37 callbacks suppressed
	[  +3.911192] kauditd_printk_skb: 177 callbacks suppressed
	[  +1.378691] kauditd_printk_skb: 126 callbacks suppressed
	[Dec 6 09:21] kauditd_printk_skb: 15 callbacks suppressed
	[  +0.000310] kauditd_printk_skb: 9 callbacks suppressed
	[Dec 6 09:23] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 6 09:24] kauditd_printk_skb: 9 callbacks suppressed
	[ +20.422822] kauditd_printk_skb: 26 callbacks suppressed
	[ +24.965389] kauditd_printk_skb: 9 callbacks suppressed
	[Dec 6 09:25] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [f7319b640fed7119b3d158c30e3bc2dd128fc0442cd17b3131fd715d76a44c9a] <==
	{"level":"info","ts":"2025-12-06T09:12:11.568524Z","caller":"traceutil/trace.go:172","msg":"trace[574553895] linearizableReadLoop","detail":"{readStateIndex:1209; appliedIndex:1209; }","duration":"261.730098ms","start":"2025-12-06T09:12:11.306778Z","end":"2025-12-06T09:12:11.568508Z","steps":["trace[574553895] 'read index received'  (duration: 261.726617ms)","trace[574553895] 'applied index is now lower than readState.Index'  (duration: 3.06µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.650658Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.826038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.650735Z","caller":"traceutil/trace.go:172","msg":"trace[1035894961] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1184; }","duration":"343.946028ms","start":"2025-12-06T09:12:11.306774Z","end":"2025-12-06T09:12:11.650720Z","steps":["trace[1035894961] 'agreement among raft nodes before linearized reading'  (duration: 261.814135ms)","trace[1035894961] 'range keys from in-memory index tree'  (duration: 81.970543ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.650785Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:12:11.306763Z","time spent":"344.009881ms","remote":"127.0.0.1:53040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:12:11.651140Z","caller":"traceutil/trace.go:172","msg":"trace[483765702] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"350.392753ms","start":"2025-12-06T09:12:11.300733Z","end":"2025-12-06T09:12:11.651125Z","steps":["trace[483765702] 'process raft request'  (duration: 267.904896ms)","trace[483765702] 'compare'  (duration: 81.445642ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.651205Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:12:11.300717Z","time spent":"350.449818ms","remote":"127.0.0.1:53164","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<>"}
	{"level":"warn","ts":"2025-12-06T09:12:11.651419Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.416676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651477Z","caller":"traceutil/trace.go:172","msg":"trace[167194031] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1185; }","duration":"172.473278ms","start":"2025-12-06T09:12:11.478992Z","end":"2025-12-06T09:12:11.651465Z","steps":["trace[167194031] 'agreement among raft nodes before linearized reading'  (duration: 172.38943ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:11.651605Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.385049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651660Z","caller":"traceutil/trace.go:172","msg":"trace[1143122093] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1185; }","duration":"270.440925ms","start":"2025-12-06T09:12:11.381211Z","end":"2025-12-06T09:12:11.651652Z","steps":["trace[1143122093] 'agreement among raft nodes before linearized reading'  (duration: 270.367937ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:11.651812Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"298.784519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651836Z","caller":"traceutil/trace.go:172","msg":"trace[535987253] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1185; }","duration":"298.810243ms","start":"2025-12-06T09:12:11.353018Z","end":"2025-12-06T09:12:11.651829Z","steps":["trace[535987253] 'agreement among raft nodes before linearized reading'  (duration: 298.76303ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:20.929795Z","caller":"traceutil/trace.go:172","msg":"trace[628627548] transaction","detail":"{read_only:false; response_revision:1260; number_of_response:1; }","duration":"105.667962ms","start":"2025-12-06T09:12:20.824110Z","end":"2025-12-06T09:12:20.929778Z","steps":["trace[628627548] 'process raft request'  (duration: 105.596429ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:23.778852Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.603155ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:23.779380Z","caller":"traceutil/trace.go:172","msg":"trace[424992269] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1281; }","duration":"218.131424ms","start":"2025-12-06T09:12:23.561231Z","end":"2025-12-06T09:12:23.779363Z","steps":["trace[424992269] 'range keys from in-memory index tree'  (duration: 217.594054ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:26.494846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.642654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:26.495325Z","caller":"traceutil/trace.go:172","msg":"trace[102060551] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1290; }","duration":"113.581468ms","start":"2025-12-06T09:12:26.381729Z","end":"2025-12-06T09:12:26.495310Z","steps":["trace[102060551] 'range keys from in-memory index tree'  (duration: 112.580581ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:13:20.713154Z","caller":"traceutil/trace.go:172","msg":"trace[1259088558] transaction","detail":"{read_only:false; response_revision:1478; number_of_response:1; }","duration":"103.020152ms","start":"2025-12-06T09:13:20.609588Z","end":"2025-12-06T09:13:20.712608Z","steps":["trace[1259088558] 'process raft request'  (duration: 102.875042ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:18:37.035751Z","caller":"traceutil/trace.go:172","msg":"trace[10856222] transaction","detail":"{read_only:false; response_revision:2013; number_of_response:1; }","duration":"171.241207ms","start":"2025-12-06T09:18:36.864442Z","end":"2025-12-06T09:18:37.035683Z","steps":["trace[10856222] 'process raft request'  (duration: 170.245197ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:21:15.400819Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1792}
	{"level":"info","ts":"2025-12-06T09:21:15.571936Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1792,"took":"167.85031ms","hash":732329829,"current-db-size-bytes":10612736,"current-db-size":"11 MB","current-db-size-in-use-bytes":7192576,"current-db-size-in-use":"7.2 MB"}
	{"level":"info","ts":"2025-12-06T09:21:15.572113Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":732329829,"revision":1792,"compact-revision":-1}
	{"level":"info","ts":"2025-12-06T09:26:15.411468Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2777}
	{"level":"info","ts":"2025-12-06T09:26:15.446017Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2777,"took":"33.344329ms","hash":961771273,"current-db-size-bytes":10612736,"current-db-size":"11 MB","current-db-size-in-use-bytes":5419008,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2025-12-06T09:26:15.446070Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":961771273,"revision":2777,"compact-revision":1792}
	
	
	==> kernel <==
	 09:27:44 up 16 min,  0 users,  load average: 0.13, 0.45, 0.54
	Linux addons-269722 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [95341ea890f7aa882f4bc2a6906002451241d8c5faa071707f5de92b27e20ce7] <==
	W1206 09:18:54.331045       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1206 09:18:54.378594       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1206 09:18:55.332669       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1206 09:18:55.731925       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1206 09:19:11.494382       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:53920: use of closed network connection
	E1206 09:19:11.675647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:53948: use of closed network connection
	I1206 09:19:20.977935       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.33.16"}
	E1206 09:19:32.614060       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:59722: use of closed network connection
	I1206 09:19:42.464058       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 09:19:42.639393       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.3.234"}
	I1206 09:19:58.097224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1206 09:21:17.002283       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:25:43.789333       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:25:43.789403       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:25:43.826741       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:25:43.827235       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:25:43.831158       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:25:43.831286       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:25:43.858828       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:25:43.858887       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:25:43.884875       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:25:43.885179       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1206 09:25:44.831635       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1206 09:25:44.885164       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1206 09:25:45.007141       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [c301895eb03e76a7f98c21fd67491f3e3114e008ac0bc660fb3871dde69fdff8] <==
	E1206 09:27:04.918186       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:04.919499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:06.793171       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:06.794780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:09.045167       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:27:12.952470       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:12.953606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:22.002699       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:22.004667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:24.045888       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:27:25.451736       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:25.453040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:29.730635       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:29.731926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:31.414178       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:31.415394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:38.026478       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:38.027548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:39.047106       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:27:40.941415       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:40.942609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:41.686971       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:41.688633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:27:41.969969       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:27:41.971156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [d9ab1c94b0adcd19eace1b7a10c0f065d7c953fc676839d82393eaab4f0c1819] <==
	I1206 09:11:27.430778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:11:27.531232       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:11:27.531444       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.220"]
	E1206 09:11:27.531895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:11:27.678473       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:11:27.678923       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:11:27.679749       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:11:27.716021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:11:27.719059       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:11:27.719117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:11:27.726703       1 config.go:200] "Starting service config controller"
	I1206 09:11:27.726733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:11:27.726750       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:11:27.726754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:11:27.730726       1 config.go:309] "Starting node config controller"
	I1206 09:11:27.730967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:11:27.730985       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:11:27.726764       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:11:27.736817       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:11:27.827489       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:11:27.827527       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:11:27.837415       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
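	On the kube-proxy warning at 09:11:27 above ("nodePortAddresses is unset ... Consider using `--nodeport-addresses primary`"): a rough sketch of the config-file equivalent, assuming the standard KubeProxyConfiguration format; the field value below is inferred from the flag named in the warning and was not used in this run.
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    nodePortAddresses:
	      - "primary"   # assumed config-file form of --nodeport-addresses primary: accept NodePort traffic only on the node's primary address
	
	The warning itself is informational; with the field unset, NodePort connections are simply accepted on all local IPs, as the message states.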
	
	
	==> kube-scheduler [31363d509c1e784ea3123303af98a26bde6cf40b74abff49509bf33b99ca8f00] <==
	E1206 09:11:17.083720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:11:17.083797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:11:17.083954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:11:17.085026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:11:17.085610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:11:17.085977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:11:17.086442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:11:17.086495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:11:17.086552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:11:17.086667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:11:17.086930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:11:17.939163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:11:17.952354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:11:17.975596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:11:18.009464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:11:18.049043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:11:18.084056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:11:18.094385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:11:18.198477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:11:18.257306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:11:18.287686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:11:18.314012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:11:18.315115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:11:18.580055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:11:21.477327       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:25:49 addons-269722 kubelet[1529]: E1206 09:25:49.923160    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:25:52 addons-269722 kubelet[1529]: E1206 09:25:52.921888    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:26:02 addons-269722 kubelet[1529]: E1206 09:26:02.923711    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:26:07 addons-269722 kubelet[1529]: W1206 09:26:07.186587    1529 logging.go:55] [core] [Channel #68 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Dec 06 09:26:07 addons-269722 kubelet[1529]: E1206 09:26:07.922576    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:26:13 addons-269722 kubelet[1529]: E1206 09:26:13.922563    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:26:18 addons-269722 kubelet[1529]: I1206 09:26:18.922073    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-4x5bp" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:26:18 addons-269722 kubelet[1529]: E1206 09:26:18.922798    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:26:24 addons-269722 kubelet[1529]: I1206 09:26:24.921575    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-l7sr8" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:26:26 addons-269722 kubelet[1529]: E1206 09:26:26.922735    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:26:30 addons-269722 kubelet[1529]: E1206 09:26:30.921783    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:26:41 addons-269722 kubelet[1529]: E1206 09:26:41.923665    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:26:44 addons-269722 kubelet[1529]: E1206 09:26:44.922579    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:26:52 addons-269722 kubelet[1529]: E1206 09:26:52.923104    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:26:59 addons-269722 kubelet[1529]: E1206 09:26:59.922078    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:27:00 addons-269722 kubelet[1529]: I1206 09:27:00.921613    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:27:07 addons-269722 kubelet[1529]: E1206 09:27:07.925152    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:27:14 addons-269722 kubelet[1529]: E1206 09:27:14.922493    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:27:19 addons-269722 kubelet[1529]: E1206 09:27:19.924329    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:27:25 addons-269722 kubelet[1529]: E1206 09:27:25.921510    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:27:26 addons-269722 kubelet[1529]: I1206 09:27:26.922373    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-4x5bp" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:27:30 addons-269722 kubelet[1529]: E1206 09:27:30.922714    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:27:36 addons-269722 kubelet[1529]: W1206 09:27:36.661327    1529 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Dec 06 09:27:39 addons-269722 kubelet[1529]: E1206 09:27:39.922477    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:27:42 addons-269722 kubelet[1529]: E1206 09:27:42.924198    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	
	
	==> storage-provisioner [a9394a7445ed60a376c7cd3e75aaac67b588412df8710faeea1ea9b282a9b119] <==
	W1206 09:27:18.191292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:20.197182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:20.202434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:22.205626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:22.212678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:24.215138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:24.221883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:26.225325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:26.230558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:28.235427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:28.242321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:30.246091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:30.251386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:32.254409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:32.260775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:34.264863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:34.272706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:36.275607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:36.281153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:38.286145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:38.295446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:40.298863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:40.306388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:42.310535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:42.317045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-269722 -n addons-269722
helpers_test.go:269: (dbg) Run:  kubectl --context addons-269722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-269722 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-269722 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k: exit status 1 (81.996042ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-269722/192.168.39.220
	Start Time:       Sat, 06 Dec 2025 09:19:42 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tppjg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tppjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m2s                  default-scheduler  Successfully assigned default/nginx to addons-269722
	  Normal   Pulling    4m59s (x5 over 8m1s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m58s (x5 over 8m)    kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m58s (x5 over 8m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    2m52s (x21 over 8m)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m52s (x21 over 8m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-269722/192.168.39.220
	Start Time:       Sat, 06 Dec 2025 09:19:41 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sn8jd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-sn8jd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-269722
	  Warning  Failed     7m46s                  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m15s (x4 over 8m2s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m15s (x5 over 8m2s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    2m49s (x20 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m49s (x20 over 8m1s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m34s (x6 over 8m3s)   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z99d9 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-z99d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kl75g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xpn6k" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-269722 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k: exit status 1
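The NotFound errors above come from the two ingress-nginx admission pods having been deleted between the pod listing and the describe call; the remaining three pods were described normally. A hypothetical variant of the same dump that tolerates already-deleted pods, using kubectl get's --ignore-not-found flag (pod names copied from the command above):

    # Hypothetical post-mortem variant: get skips pods that no longer exist
    # instead of returning a non-zero exit for them.
    kubectl --context addons-269722 get pod nginx task-pv-pod test-local-path \
      ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k \
      --ignore-not-found -o wide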
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 addons disable ingress-dns --alsologtostderr -v=1: (1.548005543s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 addons disable ingress --alsologtostderr -v=1: (7.666316605s)
--- FAIL: TestAddons/parallel/Ingress (491.77s)
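The pull failures above all trace to the same 429 toomanyrequests response from registry-1.docker.io, i.e. Docker Hub's anonymous pull rate limit, rather than to the ingress manifests themselves. One way to confirm the limiter state from the test host is Docker's documented ratelimitpreview/test check; a minimal sketch, assuming curl and jq are available on the agent:

    # Fetch an anonymous token, then read the rate-limit headers Docker Hub returns.
    TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -fsSI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'

Authenticated pulls, or pointing the runtime at a mirror (for example via minikube's --registry-mirror flag), would sidestep the anonymous limit for runs like this.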

                                                
                                    
x
+
TestAddons/parallel/CSI (373.07s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1206 09:19:37.990946  387687 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 09:19:37.997983  387687 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 09:19:37.998011  387687 kapi.go:107] duration metric: took 7.092001ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.102538ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-269722 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-269722 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-269722 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-269722 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-269722 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-269722 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [955ebad4-b055-4cbf-95e3-243af9483d37] Pending
helpers_test.go:352: "task-pv-pod" [955ebad4-b055-4cbf-95e3-243af9483d37] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-269722 -n addons-269722
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-12-06 09:25:41.522571551 +0000 UTC m=+914.809948682
addons_test.go:567: (dbg) Run:  kubectl --context addons-269722 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-269722 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-269722/192.168.39.220
Start Time:       Sat, 06 Dec 2025 09:19:41 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
IP:  10.244.0.32
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sn8jd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-sn8jd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/task-pv-pod to addons-269722
Warning  Failed     5m43s                  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m12s (x4 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m12s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Normal   BackOff    46s (x20 over 5m58s)   kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     46s (x20 over 5m58s)   kubelet            Error: ImagePullBackOff
Normal   Pulling    31s (x6 over 6m)       kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-269722 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-269722 logs task-pv-pod -n default: exit status 1 (70.620103ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-269722 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
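The describe output above already carries the pull errors; when several pods are failing at once, the same information can be read more compactly from the event stream. A minimal sketch using the pod name and namespace from the log above:

    # List the events recorded for the failing pod, oldest first.
    kubectl --context addons-269722 get events -n default \
      --field-selector involvedObject.name=task-pv-pod \
      --sort-by=.lastTimestamp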
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-269722 -n addons-269722
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 logs -n 25: (1.048633872s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                     ARGS                                                                                                                                                                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-802744 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd                                                                                                                                                                                                                                                                                   │ download-only-802744 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-802744                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-802744 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-600827                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-345944                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-345944 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-802744                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-only-802744 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ start   │ --download-only -p binary-mirror-098159 --alsologtostderr --binary-mirror http://127.0.0.1:43773 --driver=kvm2  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-098159 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ -p binary-mirror-098159                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ binary-mirror-098159 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ addons  │ enable dashboard -p addons-269722                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ addons  │ disable dashboard -p addons-269722                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ start   │ -p addons-269722 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:12 UTC │
	│ addons  │ addons-269722 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:18 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ enable headlamp -p addons-269722 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ ip      │ addons-269722 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-269722                                                                                                                                                                                                                                                                                                                                                                                               │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:19 UTC │ 06 Dec 25 09:19 UTC │
	│ addons  │ addons-269722 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                              │ addons-269722        │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:25 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:41.905948  388517 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:41.906056  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:41.906068  388517 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:41.906073  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:41.906290  388517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:10:41.906764  388517 out.go:368] Setting JSON to false
	I1206 09:10:41.907751  388517 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6792,"bootTime":1765005450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:41.907809  388517 start.go:143] virtualization: kvm guest
	I1206 09:10:41.909713  388517 out.go:179] * [addons-269722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:10:41.911209  388517 notify.go:221] Checking for updates...
	I1206 09:10:41.911229  388517 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:10:41.912645  388517 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:41.913886  388517 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:10:41.915020  388517 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:41.919365  388517 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:10:41.920580  388517 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:10:41.921823  388517 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:10:41.950647  388517 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 09:10:41.951784  388517 start.go:309] selected driver: kvm2
	I1206 09:10:41.951797  388517 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:10:41.951808  388517 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:10:41.952432  388517 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:10:41.952640  388517 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:10:41.952666  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:10:41.952706  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:10:41.952714  388517 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:10:41.952753  388517 start.go:353] cluster config:
	{Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:41.952877  388517 iso.go:125] acquiring lock: {Name:mk1a7d442a240aa1785a2e6e751e007c5a8723f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:10:41.954741  388517 out.go:179] * Starting "addons-269722" primary control-plane node in "addons-269722" cluster
	I1206 09:10:41.955614  388517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:10:41.955638  388517 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1206 09:10:41.955646  388517 cache.go:65] Caching tarball of preloaded images
	I1206 09:10:41.955737  388517 preload.go:238] Found /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1206 09:10:41.955748  388517 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 09:10:41.956043  388517 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json ...
	I1206 09:10:41.956066  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json: {Name:mka83bdbdc23544e613eb52d015ad5fe63a1e910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:41.956183  388517 start.go:360] acquireMachinesLock for addons-269722: {Name:mkc77d1cf752e1546ce7850a29dbe975ae7fa9b7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:10:41.956225  388517 start.go:364] duration metric: took 30.995µs to acquireMachinesLock for "addons-269722"
	I1206 09:10:41.956247  388517 start.go:93] Provisioning new machine with config: &{Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:10:41.956289  388517 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 09:10:41.957646  388517 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1206 09:10:41.957797  388517 start.go:159] libmachine.API.Create for "addons-269722" (driver="kvm2")
	I1206 09:10:41.957831  388517 client.go:173] LocalClient.Create starting
	I1206 09:10:41.957926  388517 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem
	I1206 09:10:41.993468  388517 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem
	I1206 09:10:42.078767  388517 main.go:143] libmachine: creating domain...
	I1206 09:10:42.078784  388517 main.go:143] libmachine: creating network...
	I1206 09:10:42.080023  388517 main.go:143] libmachine: found existing default network
	I1206 09:10:42.080210  388517 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
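For reference, the default libvirt network dumped above can also be inspected directly on the host with the libvirt client; a minimal check, assuming virsh is available and the qemu:///system URI used in this run:

  virsh --connect qemu:///system net-list --all
  virsh --connect qemu:///system net-dumpxml default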
	
	I1206 09:10:42.080787  388517 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d56770}
	I1206 09:10:42.080910  388517 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-269722</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
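The creation step that follows is roughly equivalent to feeding this XML to the libvirt client by hand; a sketch, assuming the definition above were saved to mk-addons-269722.xml (hypothetical filename):

  virsh --connect qemu:///system net-define mk-addons-269722.xml
  virsh --connect qemu:///system net-start mk-addons-269722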
	
	I1206 09:10:42.086592  388517 main.go:143] libmachine: creating private network mk-addons-269722 192.168.39.0/24...
	I1206 09:10:42.152917  388517 main.go:143] libmachine: private network mk-addons-269722 192.168.39.0/24 created
	I1206 09:10:42.153176  388517 main.go:143] libmachine: <network>
	  <name>mk-addons-269722</name>
	  <uuid>2336c74c-93b2-42b0-890b-3a8a8a25a922</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:fd:c9:1f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:10:42.153203  388517 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 ...
	I1206 09:10:42.153230  388517 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22047-383742/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 09:10:42.153244  388517 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:42.153313  388517 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22047-383742/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22047-383742/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
	I1206 09:10:42.415061  388517 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa...
	I1206 09:10:42.429309  388517 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk...
	I1206 09:10:42.429369  388517 main.go:143] libmachine: Writing magic tar header
	I1206 09:10:42.429404  388517 main.go:143] libmachine: Writing SSH key tar header
	I1206 09:10:42.429498  388517 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 ...
	I1206 09:10:42.429571  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722
	I1206 09:10:42.429604  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 (perms=drwx------)
	I1206 09:10:42.429623  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube/machines
	I1206 09:10:42.429636  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube/machines (perms=drwxr-xr-x)
	I1206 09:10:42.429647  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:42.429656  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube (perms=drwxr-xr-x)
	I1206 09:10:42.429674  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742
	I1206 09:10:42.429704  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742 (perms=drwxrwxr-x)
	I1206 09:10:42.429722  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1206 09:10:42.429744  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 09:10:42.429758  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1206 09:10:42.429765  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 09:10:42.429775  388517 main.go:143] libmachine: checking permissions on dir: /home
	I1206 09:10:42.429781  388517 main.go:143] libmachine: skipping /home - not owner
	I1206 09:10:42.429788  388517 main.go:143] libmachine: defining domain...
	I1206 09:10:42.431063  388517 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-269722</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-269722'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
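Defining the guest from this XML is roughly what a manual libvirt workflow would do as well; a sketch, assuming the XML above were saved to addons-269722.xml (hypothetical filename):

  virsh --connect qemu:///system define addons-269722.xml
  virsh --connect qemu:///system dominfo addons-269722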
	
	I1206 09:10:42.438342  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:8d:9c:cf in network default
	I1206 09:10:42.438932  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:42.438948  388517 main.go:143] libmachine: starting domain...
	I1206 09:10:42.438952  388517 main.go:143] libmachine: ensuring networks are active...
	I1206 09:10:42.439580  388517 main.go:143] libmachine: Ensuring network default is active
	I1206 09:10:42.439915  388517 main.go:143] libmachine: Ensuring network mk-addons-269722 is active
	I1206 09:10:42.440425  388517 main.go:143] libmachine: getting domain XML...
	I1206 09:10:42.441355  388517 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-269722</name>
	  <uuid>faaa974f-af9d-46f8-a3b5-02afcdf78e43</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f2:80:b2'/>
	      <source network='mk-addons-269722'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:8d:9c:cf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
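The wait-for-IP loop below polls the guest's interface addresses, first from the DHCP lease table and then from ARP; roughly the same queries can be issued by hand on the host:

  virsh --connect qemu:///system domifaddr addons-269722 --source lease
  virsh --connect qemu:///system domifaddr addons-269722 --source arp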
	
	I1206 09:10:43.781082  388517 main.go:143] libmachine: waiting for domain to start...
	I1206 09:10:43.782318  388517 main.go:143] libmachine: domain is now running
	I1206 09:10:43.782338  388517 main.go:143] libmachine: waiting for IP...
	I1206 09:10:43.783021  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:43.783369  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:43.783385  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:43.783643  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:43.783696  388517 retry.go:31] will retry after 278.987444ms: waiting for domain to come up
	I1206 09:10:44.064124  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.064595  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.064606  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.064919  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.064957  388517 retry.go:31] will retry after 330.689041ms: waiting for domain to come up
	I1206 09:10:44.397460  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.397947  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.397962  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.398238  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.398277  388517 retry.go:31] will retry after 413.406233ms: waiting for domain to come up
	I1206 09:10:44.812999  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.813581  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.813601  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.813924  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.813970  388517 retry.go:31] will retry after 440.754763ms: waiting for domain to come up
	I1206 09:10:45.256730  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:45.257210  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:45.257228  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:45.257514  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:45.257556  388517 retry.go:31] will retry after 717.110818ms: waiting for domain to come up
	I1206 09:10:45.975902  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:45.976408  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:45.976424  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:45.976689  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:45.976722  388517 retry.go:31] will retry after 589.246662ms: waiting for domain to come up
	I1206 09:10:46.567419  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:46.567953  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:46.567973  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:46.568280  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:46.568326  388517 retry.go:31] will retry after 857.836192ms: waiting for domain to come up
	I1206 09:10:47.427627  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:47.428082  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:47.428097  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:47.428421  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:47.428475  388517 retry.go:31] will retry after 969.137484ms: waiting for domain to come up
	I1206 09:10:48.399647  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:48.400199  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:48.400215  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:48.400562  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:48.400615  388517 retry.go:31] will retry after 1.740343977s: waiting for domain to come up
	I1206 09:10:50.143512  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:50.143999  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:50.144014  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:50.144329  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:50.144363  388517 retry.go:31] will retry after 2.180103707s: waiting for domain to come up
	I1206 09:10:52.325956  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:52.326470  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:52.326485  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:52.326823  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:52.326870  388517 retry.go:31] will retry after 2.821995124s: waiting for domain to come up
	I1206 09:10:55.151850  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:55.152380  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:55.152397  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:55.152818  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:55.152881  388517 retry.go:31] will retry after 2.278330426s: waiting for domain to come up
	I1206 09:10:57.432300  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:57.432813  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:57.432829  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:57.433107  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:57.433144  388517 retry.go:31] will retry after 3.558016636s: waiting for domain to come up
	I1206 09:11:00.994805  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:00.995368  388517 main.go:143] libmachine: domain addons-269722 has current primary IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:00.995386  388517 main.go:143] libmachine: found domain IP: 192.168.39.220
	I1206 09:11:00.995394  388517 main.go:143] libmachine: reserving static IP address...
	I1206 09:11:00.995774  388517 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-269722", mac: "52:54:00:f2:80:b2", ip: "192.168.39.220"} in network mk-addons-269722
	I1206 09:11:01.169742  388517 main.go:143] libmachine: reserved static IP address 192.168.39.220 for domain addons-269722
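The DHCP lease that gets matched and reserved here can be listed on the host (virsh assumed available):

  virsh --connect qemu:///system net-dhcp-leases mk-addons-269722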
	I1206 09:11:01.169781  388517 main.go:143] libmachine: waiting for SSH...
	I1206 09:11:01.169788  388517 main.go:143] libmachine: Getting to WaitForSSH function...
	I1206 09:11:01.172807  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.173481  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.173514  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.173694  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.173964  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.173979  388517 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1206 09:11:01.272210  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
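The SSH probe above amounts to the following command from the host, using the key generated earlier and the docker user shown further down in the log:

  ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa \
      docker@192.168.39.220 'exit 0'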
	I1206 09:11:01.272513  388517 main.go:143] libmachine: domain creation complete
	I1206 09:11:01.273828  388517 machine.go:94] provisionDockerMachine start ...
	I1206 09:11:01.275801  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.276155  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.276181  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.276321  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.276511  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.276520  388517 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:11:01.373100  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1206 09:11:01.373130  388517 buildroot.go:166] provisioning hostname "addons-269722"
	I1206 09:11:01.375944  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.376345  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.376372  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.376608  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.376841  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.376854  388517 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-269722 && echo "addons-269722" | sudo tee /etc/hostname
	I1206 09:11:01.490874  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-269722
	
	I1206 09:11:01.493600  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.493995  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.494015  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.494204  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.494457  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.494481  388517 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-269722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-269722/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-269722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:11:01.601899  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
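The hostname script above is an idempotent /etc/hosts edit; once it has run, the guest should map 127.0.1.1 to the new name, which can be confirmed with:

  grep -n '127.0.1.1' /etc/hosts   # expected: 127.0.1.1 addons-269722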
	I1206 09:11:01.601925  388517 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-383742/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-383742/.minikube}
	I1206 09:11:01.601941  388517 buildroot.go:174] setting up certificates
	I1206 09:11:01.601950  388517 provision.go:84] configureAuth start
	I1206 09:11:01.604648  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.605083  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.605108  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607340  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607665  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.607684  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607799  388517 provision.go:143] copyHostCerts
	I1206 09:11:01.607857  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/ca.pem (1082 bytes)
	I1206 09:11:01.608028  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/cert.pem (1123 bytes)
	I1206 09:11:01.608130  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/key.pem (1675 bytes)
	I1206 09:11:01.608197  388517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem org=jenkins.addons-269722 san=[127.0.0.1 192.168.39.220 addons-269722 localhost minikube]
	I1206 09:11:01.761887  388517 provision.go:177] copyRemoteCerts
	I1206 09:11:01.761947  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:11:01.764212  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.764543  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.764581  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.764716  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:01.844794  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:11:01.873452  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:11:01.901904  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:11:01.930285  388517 provision.go:87] duration metric: took 328.321351ms to configureAuth
	I1206 09:11:01.930311  388517 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:11:01.930501  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:01.930521  388517 machine.go:97] duration metric: took 656.676665ms to provisionDockerMachine
	I1206 09:11:01.930531  388517 client.go:176] duration metric: took 19.972691553s to LocalClient.Create
	I1206 09:11:01.930551  388517 start.go:167] duration metric: took 19.97275355s to libmachine.API.Create "addons-269722"
	I1206 09:11:01.930596  388517 start.go:293] postStartSetup for "addons-269722" (driver="kvm2")
	I1206 09:11:01.930611  388517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:11:01.930658  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:11:01.933229  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.933604  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.933625  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.933768  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.013069  388517 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:11:02.017563  388517 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:11:02.017583  388517 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-383742/.minikube/addons for local assets ...
	I1206 09:11:02.017651  388517 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-383742/.minikube/files for local assets ...
	I1206 09:11:02.017684  388517 start.go:296] duration metric: took 87.076069ms for postStartSetup
	I1206 09:11:02.020584  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.020944  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.020967  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.021198  388517 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json ...
	I1206 09:11:02.021364  388517 start.go:128] duration metric: took 20.065065791s to createHost
	I1206 09:11:02.023485  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.023794  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.023813  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.023959  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:02.024173  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:02.024185  388517 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:11:02.121919  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765012262.085933657
	
	I1206 09:11:02.121936  388517 fix.go:216] guest clock: 1765012262.085933657
	I1206 09:11:02.121942  388517 fix.go:229] Guest: 2025-12-06 09:11:02.085933657 +0000 UTC Remote: 2025-12-06 09:11:02.021381724 +0000 UTC m=+20.161953678 (delta=64.551933ms)
	I1206 09:11:02.121960  388517 fix.go:200] guest clock delta is within tolerance: 64.551933ms
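The reported delta is simply guest wall-clock minus host wall-clock at the moment of the check: 1765012262.085933657 s - 1765012262.021381724 s = 0.064551933 s = 64.551933 ms, hence the within-tolerance result.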
	I1206 09:11:02.121974  388517 start.go:83] releasing machines lock for "addons-269722", held for 20.165731842s
	I1206 09:11:02.124594  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.124944  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.124973  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.125474  388517 ssh_runner.go:195] Run: cat /version.json
	I1206 09:11:02.125592  388517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:11:02.128433  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.128746  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.128763  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.128921  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.128989  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.129445  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.129480  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.129624  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.204247  388517 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:02.228305  388517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:11:02.234563  388517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:11:02.234633  388517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:11:02.260428  388517 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:11:02.260454  388517 start.go:496] detecting cgroup driver to use...
	I1206 09:11:02.260528  388517 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:11:02.297166  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:11:02.315488  388517 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:11:02.315555  388517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:11:02.332111  388517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:11:02.347076  388517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:11:02.491701  388517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:11:02.703514  388517 docker.go:234] disabling docker service ...
	I1206 09:11:02.703604  388517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:11:02.719452  388517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:11:02.733466  388517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:11:02.882667  388517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:11:03.020738  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:11:03.036166  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:11:03.057682  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:11:03.069874  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:11:03.081945  388517 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:11:03.082022  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:11:03.094105  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:11:03.106250  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:11:03.117968  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:11:03.130001  388517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:11:03.142658  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:11:03.154729  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:11:03.166983  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
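Taken together, the sed edits above leave /etc/containerd/config.toml with roughly the following fields (a sketch using containerd 1.7's CRI plugin layout; the real file contains many more settings):

  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.10.1"
    restrict_oom_score_adj = false
    enable_unprivileged_ports = true
    [plugins."io.containerd.grpc.v1.cri".cni]
      conf_dir = "/etc/cni/net.d"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false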
	I1206 09:11:03.178658  388517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:11:03.188759  388517 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 09:11:03.188803  388517 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 09:11:03.211314  388517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
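The sysctl probe a few lines up fails only because /proc/sys/net/bridge/* does not exist until br_netfilter is loaded; after the modprobe and the ip_forward write above, the settings can be confirmed with:

  sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward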
	I1206 09:11:03.224103  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:03.361032  388517 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:11:03.404281  388517 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:11:03.404385  388517 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:11:03.409523  388517 retry.go:31] will retry after 1.49666292s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1206 09:11:04.906469  388517 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:11:04.912677  388517 start.go:564] Will wait 60s for crictl version
	I1206 09:11:04.912759  388517 ssh_runner.go:195] Run: which crictl
	I1206 09:11:04.916909  388517 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:11:04.952021  388517 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1206 09:11:04.952114  388517 ssh_runner.go:195] Run: containerd --version
	I1206 09:11:04.979176  388517 ssh_runner.go:195] Run: containerd --version
	I1206 09:11:05.046042  388517 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 1.7.23 ...
	I1206 09:11:05.113332  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:05.113713  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:05.113733  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:05.113904  388517 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:11:05.118728  388517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:05.134279  388517 kubeadm.go:884] updating cluster {Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:11:05.134389  388517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:11:05.134436  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:05.163245  388517 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1206 09:11:05.163338  388517 ssh_runner.go:195] Run: which lz4
	I1206 09:11:05.167791  388517 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 09:11:05.172645  388517 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 09:11:05.172675  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (339763354 bytes)
	I1206 09:11:06.408453  388517 containerd.go:563] duration metric: took 1.240701247s to copy over tarball
	I1206 09:11:06.408534  388517 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 09:11:07.824785  388517 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.41620911s)
	I1206 09:11:07.824829  388517 containerd.go:570] duration metric: took 1.416348198s to extract the tarball
	I1206 09:11:07.824837  388517 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 09:11:07.876750  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:08.019449  388517 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:11:08.055912  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:08.089979  388517 retry.go:31] will retry after 204.800226ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:08Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1206 09:11:08.295519  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:08.332986  388517 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:11:08.333019  388517 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:11:08.333035  388517 kubeadm.go:935] updating node { 192.168.39.220 8443 v1.34.2 containerd true true} ...
	I1206 09:11:08.333199  388517 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-269722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:11:08.333263  388517 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:11:08.363626  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:11:08.363652  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:11:08.363671  388517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:11:08.363694  388517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-269722 NodeName:addons-269722 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:11:08.363802  388517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-269722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:11:08.363898  388517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:11:08.376320  388517 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:11:08.376400  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:11:08.387974  388517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1206 09:11:08.408073  388517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:11:08.428105  388517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1206 09:11:08.448237  388517 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I1206 09:11:08.452207  388517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
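The bash one-liner above rewrites /etc/hosts so that exactly one control-plane.minikube.internal line points at the node IP. A rough Go equivalent of that idempotent update (path, IP and hostname copied from the log; error handling kept minimal for brevity):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line ending in "<TAB>hostname" and
    // appends a fresh "ip<TAB>hostname" mapping, like the grep/echo pipeline above.
    func ensureHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+hostname) {
                continue // remove stale mapping
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.220", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }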
	I1206 09:11:08.466654  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:08.612134  388517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:08.650190  388517 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722 for IP: 192.168.39.220
	I1206 09:11:08.650221  388517 certs.go:195] generating shared ca certs ...
	I1206 09:11:08.650248  388517 certs.go:227] acquiring lock for ca certs: {Name:mkf308ce4033be42aa40d533f6774edcee747959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.650426  388517 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key
	I1206 09:11:08.753472  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt ...
	I1206 09:11:08.753502  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt: {Name:mk0bc547e2c4a3698a714e2e67e37fe0843ac532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.753663  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key ...
	I1206 09:11:08.753675  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key: {Name:mk257636778cdf81faeb62cfd641c994d65ea561 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.753763  388517 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key
	I1206 09:11:08.944161  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt ...
	I1206 09:11:08.944193  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt: {Name:mk7a27f62c25f1293f691b851f1b366a8491b851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.944357  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key ...
	I1206 09:11:08.944369  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key: {Name:mk0dbe369ea38e824cffd9d96349344507b04d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.944442  388517 certs.go:257] generating profile certs ...
	I1206 09:11:08.944507  388517 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key
	I1206 09:11:08.944522  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt with IP's: []
	I1206 09:11:09.004417  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt ...
	I1206 09:11:09.004443  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: {Name:mkc7ee580529997a0158c489e5de6aaaab4381ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.004577  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key ...
	I1206 09:11:09.004587  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key: {Name:mk6aea14e5a790daaff4a5aa584541cbd36fa7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.004653  388517 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9
	I1206 09:11:09.004671  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220]
	I1206 09:11:09.103453  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 ...
	I1206 09:11:09.103485  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9: {Name:mkb69edd53ea15cc714b2e6dcd35fb9bda8e0a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.103642  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9 ...
	I1206 09:11:09.103658  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9: {Name:mkbef642e3d05cf341f2d82d3597bab753cd2174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.103728  388517 certs.go:382] copying /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 -> /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt
	I1206 09:11:09.103816  388517 certs.go:386] copying /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9 -> /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key
	I1206 09:11:09.103876  388517 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key
	I1206 09:11:09.103896  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt with IP's: []
	I1206 09:11:09.195473  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt ...
	I1206 09:11:09.195504  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt: {Name:mk1ed5a652995aaac584bd788ffca22c7d7d4179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.195645  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key ...
	I1206 09:11:09.195657  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key: {Name:mkb0905602ecfb2d53502a566a95204a8f98bd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.195846  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 09:11:09.195899  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:11:09.195942  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:11:09.195967  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem (1675 bytes)
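The cert lines above show the general flow: generate a shared CA, then sign per-profile certs with it (the apiserver cert gets the IP SANs listed in the log). A compressed Go sketch of that flow using crypto/x509; key sizes, lifetimes and subject names here are assumptions, not minikube's exact values:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA (errors ignored only to keep the sketch short).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert signed by the CA, with the IP SANs from the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.220"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        _ = srvDER // a real implementation would PEM-encode and write both cert and key to disk
    }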
	I1206 09:11:09.196610  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:11:09.227924  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:11:09.257244  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:11:09.287169  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:11:09.319682  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:11:09.354785  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:11:09.391203  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:11:09.419761  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:11:09.448250  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:11:09.476343  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:11:09.495953  388517 ssh_runner.go:195] Run: openssl version
	I1206 09:11:09.502134  388517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.512996  388517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:11:09.524111  388517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.529273  388517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.529325  388517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.536780  388517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:11:09.547642  388517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:11:09.558961  388517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:11:09.563664  388517 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:11:09.563723  388517 kubeadm.go:401] StartCluster: {Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:09.563812  388517 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:11:09.563854  388517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:11:09.597231  388517 cri.go:89] found id: ""
	I1206 09:11:09.597295  388517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:11:09.609197  388517 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:11:09.619916  388517 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:11:09.631012  388517 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:11:09.631028  388517 kubeadm.go:158] found existing configuration files:
	
	I1206 09:11:09.631067  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:11:09.641398  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:11:09.641442  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:11:09.652328  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:11:09.662630  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:11:09.662683  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:11:09.673582  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:11:09.683944  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:11:09.683997  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:11:09.694924  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:11:09.705284  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:11:09.705332  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:11:09.716270  388517 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 09:11:09.765023  388517 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:11:09.765245  388517 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:11:09.858054  388517 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:11:09.858229  388517 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:11:09.858396  388517 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:11:09.865139  388517 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:11:09.920280  388517 out.go:252]   - Generating certificates and keys ...
	I1206 09:11:09.920378  388517 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:11:09.920462  388517 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:11:10.105985  388517 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:11:10.865814  388517 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:11:10.897033  388517 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:11:11.249180  388517 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:11:11.405265  388517 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:11:11.405459  388517 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-269722 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1206 09:11:11.595783  388517 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:11:11.595930  388517 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-269722 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1206 09:11:11.685113  388517 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:11:11.795320  388517 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:11:12.056322  388517 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:11:12.057602  388517 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:11:12.245522  388517 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:11:12.344100  388517 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:11:12.481696  388517 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:11:12.805057  388517 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:11:12.987909  388517 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:11:12.988354  388517 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:11:12.990637  388517 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:11:12.992591  388517 out.go:252]   - Booting up control plane ...
	I1206 09:11:12.992683  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:11:12.992757  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:11:12.992829  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:11:13.009376  388517 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:11:13.009528  388517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:11:13.016083  388517 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:11:13.016157  388517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:11:13.016213  388517 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:11:13.195314  388517 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:11:13.195457  388517 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:11:13.696155  388517 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.400144ms
	I1206 09:11:13.701317  388517 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:11:13.701412  388517 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.220:8443/livez
	I1206 09:11:13.701516  388517 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:11:13.701609  388517 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:11:15.925448  388517 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.2258309s
	I1206 09:11:17.097937  388517 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.399298925s
	I1206 09:11:19.199961  388517 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502821586s
	I1206 09:11:19.217728  388517 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:11:19.231172  388517 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:11:19.244842  388517 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:11:19.245047  388517 kubeadm.go:319] [mark-control-plane] Marking the node addons-269722 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:11:19.255597  388517 kubeadm.go:319] [bootstrap-token] Using token: tnc6di.0o5js773tkjcekar
	I1206 09:11:19.256827  388517 out.go:252]   - Configuring RBAC rules ...
	I1206 09:11:19.256963  388517 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:11:19.261388  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:11:19.269766  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:11:19.273599  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:11:19.281952  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:11:19.288853  388517 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:11:19.605592  388517 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:11:20.070227  388517 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:11:20.605934  388517 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:11:20.606844  388517 kubeadm.go:319] 
	I1206 09:11:20.606929  388517 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:11:20.606938  388517 kubeadm.go:319] 
	I1206 09:11:20.607026  388517 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:11:20.607033  388517 kubeadm.go:319] 
	I1206 09:11:20.607064  388517 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:11:20.607146  388517 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:11:20.607224  388517 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:11:20.607234  388517 kubeadm.go:319] 
	I1206 09:11:20.607327  388517 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:11:20.607350  388517 kubeadm.go:319] 
	I1206 09:11:20.607426  388517 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:11:20.607434  388517 kubeadm.go:319] 
	I1206 09:11:20.607510  388517 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:11:20.607639  388517 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:11:20.607758  388517 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:11:20.607774  388517 kubeadm.go:319] 
	I1206 09:11:20.607894  388517 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:11:20.607992  388517 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:11:20.608007  388517 kubeadm.go:319] 
	I1206 09:11:20.608129  388517 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tnc6di.0o5js773tkjcekar \
	I1206 09:11:20.608283  388517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04fdba1f0cc9e5b6ff9fb0c67883e9efc1b2d27a26263d71016b7c2692858db2 \
	I1206 09:11:20.608307  388517 kubeadm.go:319] 	--control-plane 
	I1206 09:11:20.608316  388517 kubeadm.go:319] 
	I1206 09:11:20.608391  388517 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:11:20.608397  388517 kubeadm.go:319] 
	I1206 09:11:20.608494  388517 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tnc6di.0o5js773tkjcekar \
	I1206 09:11:20.608638  388517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04fdba1f0cc9e5b6ff9fb0c67883e9efc1b2d27a26263d71016b7c2692858db2 
	I1206 09:11:20.609835  388517 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:11:20.609893  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:11:20.609910  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:11:20.611407  388517 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:11:20.612520  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:11:20.630100  388517 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
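The 496-byte file scp'd above is the bridge CNI conflist for the pod CIDR chosen earlier. A sketch of roughly what such a conflist looks like, written from Go as minikube does; the exact fields in minikube's generated file may differ, only the pod subnet is taken from the log:

    package main

    import "os"

    // Approximate shape of a bridge CNI conflist; illustrative, not the
    // verbatim content minikube writes to /etc/cni/net.d/1-k8s.conflist.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        _ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644)
    }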
	I1206 09:11:20.652382  388517 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:11:20.652515  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:20.652537  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-269722 minikube.k8s.io/updated_at=2025_12_06T09_11_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-269722 minikube.k8s.io/primary=true
	I1206 09:11:20.694430  388517 ops.go:34] apiserver oom_adj: -16
	I1206 09:11:20.784013  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:21.284280  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:21.784935  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:22.284329  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:22.784096  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:23.284134  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:23.784412  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:24.285006  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:24.365500  388517 kubeadm.go:1114] duration metric: took 3.713041621s to wait for elevateKubeSystemPrivileges
	I1206 09:11:24.365554  388517 kubeadm.go:403] duration metric: took 14.801837471s to StartCluster
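The repeated "kubectl get sa default" runs above are minikube polling, roughly every 500ms, until the default service account exists before granting kube-system privileges. A plain Go version of that poll loop (command and interval copied from the log; the timeout value is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // pollDefaultSA reruns `kubectl get sa default` until it succeeds or the
    // timeout expires, mirroring the 500ms retry cadence visible in the log.
    func pollDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account not ready after %s", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := pollDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }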
	I1206 09:11:24.365583  388517 settings.go:142] acquiring lock: {Name:mk5046213dcb1abe0d7fe7b15722aa4884a98be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:24.365735  388517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:11:24.366166  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/kubeconfig: {Name:mka1b03c13e1e115a4ba1af8cb483b83d246825c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:24.366385  388517 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:11:24.366393  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:11:24.366467  388517 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 09:11:24.366579  388517 addons.go:70] Setting yakd=true in profile "addons-269722"
	I1206 09:11:24.366593  388517 addons.go:70] Setting inspektor-gadget=true in profile "addons-269722"
	I1206 09:11:24.366594  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:24.366606  388517 addons.go:239] Setting addon yakd=true in "addons-269722"
	I1206 09:11:24.366612  388517 addons.go:239] Setting addon inspektor-gadget=true in "addons-269722"
	I1206 09:11:24.366637  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366644  388517 addons.go:70] Setting default-storageclass=true in profile "addons-269722"
	I1206 09:11:24.366651  388517 addons.go:70] Setting gcp-auth=true in profile "addons-269722"
	I1206 09:11:24.366663  388517 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-269722"
	I1206 09:11:24.366682  388517 mustload.go:66] Loading cluster: addons-269722
	I1206 09:11:24.366726  388517 addons.go:70] Setting registry-creds=true in profile "addons-269722"
	I1206 09:11:24.366753  388517 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-269722"
	I1206 09:11:24.366753  388517 addons.go:70] Setting cloud-spanner=true in profile "addons-269722"
	I1206 09:11:24.366778  388517 addons.go:239] Setting addon registry-creds=true in "addons-269722"
	I1206 09:11:24.366781  388517 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-269722"
	I1206 09:11:24.366784  388517 addons.go:239] Setting addon cloud-spanner=true in "addons-269722"
	I1206 09:11:24.366787  388517 addons.go:70] Setting storage-provisioner=true in profile "addons-269722"
	I1206 09:11:24.366800  388517 addons.go:239] Setting addon storage-provisioner=true in "addons-269722"
	I1206 09:11:24.366811  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366818  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366819  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366821  388517 addons.go:70] Setting metrics-server=true in profile "addons-269722"
	I1206 09:11:24.366836  388517 addons.go:239] Setting addon metrics-server=true in "addons-269722"
	I1206 09:11:24.366850  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366901  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:24.366979  388517 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-269722"
	I1206 09:11:24.367005  388517 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-269722"
	I1206 09:11:24.367028  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367504  388517 addons.go:70] Setting registry=true in profile "addons-269722"
	I1206 09:11:24.367531  388517 addons.go:239] Setting addon registry=true in "addons-269722"
	I1206 09:11:24.367561  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367879  388517 addons.go:70] Setting ingress=true in profile "addons-269722"
	I1206 09:11:24.367904  388517 addons.go:239] Setting addon ingress=true in "addons-269722"
	I1206 09:11:24.366811  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367940  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367975  388517 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-269722"
	I1206 09:11:24.367998  388517 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-269722"
	I1206 09:11:24.368012  388517 addons.go:70] Setting volcano=true in profile "addons-269722"
	I1206 09:11:24.368028  388517 addons.go:239] Setting addon volcano=true in "addons-269722"
	I1206 09:11:24.368051  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368065  388517 addons.go:70] Setting volumesnapshots=true in profile "addons-269722"
	I1206 09:11:24.368083  388517 addons.go:239] Setting addon volumesnapshots=true in "addons-269722"
	I1206 09:11:24.368108  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368318  388517 addons.go:70] Setting ingress-dns=true in profile "addons-269722"
	I1206 09:11:24.368334  388517 addons.go:239] Setting addon ingress-dns=true in "addons-269722"
	I1206 09:11:24.368504  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368582  388517 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-269722"
	I1206 09:11:24.368650  388517 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-269722"
	I1206 09:11:24.368672  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368873  388517 out.go:179] * Verifying Kubernetes components...
	I1206 09:11:24.366646  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.370225  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:24.371769  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.373754  388517 addons.go:239] Setting addon default-storageclass=true in "addons-269722"
	I1206 09:11:24.373789  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.374301  388517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:11:24.374379  388517 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 09:11:24.375268  388517 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 09:11:24.375275  388517 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 09:11:24.375328  388517 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:24.375343  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:11:24.376013  388517 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 09:11:24.376046  388517 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 09:11:24.376074  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 09:11:24.376035  388517 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 09:11:24.376134  388517 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-269722"
	I1206 09:11:24.376581  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.376790  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 09:11:24.376809  388517 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 09:11:24.376827  388517 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:11:24.376841  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 09:11:24.376847  388517 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 09:11:24.377596  388517 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:11:24.377612  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 09:11:24.378229  388517 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 09:11:24.378237  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 09:11:24.378252  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 09:11:24.378268  388517 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 09:11:24.378231  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:24.378298  388517 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1206 09:11:24.378904  388517 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 09:11:24.378904  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 09:11:24.378253  388517 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 09:11:24.379492  388517 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:24.379507  388517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:11:24.379650  388517 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 09:11:24.379665  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 09:11:24.379672  388517 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:11:24.379683  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 09:11:24.380334  388517 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:11:24.380373  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 09:11:24.380344  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 09:11:24.380559  388517 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 09:11:24.380561  388517 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 09:11:24.381552  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 09:11:24.381577  388517 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1206 09:11:24.382302  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:24.382322  388517 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:11:24.382342  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 09:11:24.384092  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 09:11:24.384119  388517 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1206 09:11:24.384134  388517 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 09:11:24.384092  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 09:11:24.385853  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.386682  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 09:11:24.386986  388517 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:11:24.387009  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 09:11:24.387404  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.387763  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.387799  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388004  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388093  388517 out.go:179]   - Using image docker.io/busybox:stable
	I1206 09:11:24.388126  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388701  388517 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:11:24.388724  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1206 09:11:24.389099  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.389150  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.389220  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 09:11:24.389288  388517 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:11:24.389303  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 09:11:24.389924  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.389981  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390249  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390264  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390288  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390293  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390722  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.390908  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390941  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.391134  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.391134  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.391542  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 09:11:24.391835  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.392214  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.392478  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.393141  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394085  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394128  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394319  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394473  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394510  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394522  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394539  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394585  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 09:11:24.394628  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394751  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.395613  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396225  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396316  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396359  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.396795  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.396833  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397321  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397322  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.397417  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397434  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397472  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397481  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397505  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397761  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 09:11:24.397813  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.397879  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.398225  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.398815  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 09:11:24.398876  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 09:11:24.398990  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399146  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399416  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399466  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.399501  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399518  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.399553  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399720  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.399930  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.400166  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.400198  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.400399  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.401986  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.402373  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.402406  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.402558  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	W1206 09:11:24.544745  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34226->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.544776  388517 retry.go:31] will retry after 167.524935ms: ssh: handshake failed: read tcp 192.168.39.1:34226->192.168.39.220:22: read: connection reset by peer
	W1206 09:11:24.544834  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34242->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.544842  388517 retry.go:31] will retry after 337.340492ms: ssh: handshake failed: read tcp 192.168.39.1:34242->192.168.39.220:22: read: connection reset by peer
	W1206 09:11:24.586807  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34260->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.586836  388517 retry.go:31] will retry after 361.026308ms: ssh: handshake failed: read tcp 192.168.39.1:34260->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.720251  388517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:24.720260  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:11:24.915042  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:24.943642  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:11:24.946926  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:11:25.098136  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:25.119770  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 09:11:25.119795  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 09:11:25.208175  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:11:25.224407  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 09:11:25.224432  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 09:11:25.225309  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:11:25.232666  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 09:11:25.232682  388517 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 09:11:25.246755  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 09:11:25.246777  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 09:11:25.247663  388517 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 09:11:25.247683  388517 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 09:11:25.270838  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:11:25.331361  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:11:25.449965  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 09:11:25.469046  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:11:25.613424  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 09:11:25.613456  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 09:11:25.633923  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 09:11:25.633954  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 09:11:25.657079  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 09:11:25.657110  388517 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 09:11:25.695667  388517 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:11:25.695693  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 09:11:25.696553  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:11:25.756474  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 09:11:25.756502  388517 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 09:11:26.160704  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 09:11:26.160736  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 09:11:26.284633  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 09:11:26.284662  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 09:11:26.286985  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:11:26.434395  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 09:11:26.434422  388517 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 09:11:26.465197  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:11:26.465225  388517 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 09:11:26.661217  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:11:26.661249  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 09:11:26.705778  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:11:26.774501  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 09:11:26.774527  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 09:11:26.849719  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:11:26.906080  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 09:11:26.906136  388517 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 09:11:27.000268  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 09:11:27.000294  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 09:11:27.610778  388517 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:27.610815  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 09:11:27.800583  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 09:11:27.800607  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 09:11:27.882544  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:28.272413  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 09:11:28.272451  388517 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 09:11:28.298383  388517 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.578087161s)
	I1206 09:11:28.298435  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.38335524s)
	I1206 09:11:28.298380  388517 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.578018491s)
	I1206 09:11:28.298514  388517 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
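
The ConfigMap edit completed above pipes the live coredns ConfigMap through sed to insert a hosts block (and a log directive) and then replaces the object. A minimal sketch, assuming kubectl access to the same cluster context, of how the injected record can be verified; the IP 192.168.39.1 and the host.minikube.internal name are taken verbatim from the command in the log:

    # inspect the Corefile carried by the coredns ConfigMap
    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expected output, per the sed filter above:
    #         hosts {
    #            192.168.39.1 host.minikube.internal
    #            fallthrough
    #         }
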
	I1206 09:11:28.298551  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.354877639s)
	I1206 09:11:28.298640  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.351685893s)
	I1206 09:11:28.299174  388517 node_ready.go:35] waiting up to 6m0s for node "addons-269722" to be "Ready" ...
	I1206 09:11:28.373103  388517 node_ready.go:49] node "addons-269722" is "Ready"
	I1206 09:11:28.373131  388517 node_ready.go:38] duration metric: took 73.939285ms for node "addons-269722" to be "Ready" ...
	I1206 09:11:28.373146  388517 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:11:28.373191  388517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:11:28.564603  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 09:11:28.564627  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 09:11:28.805525  388517 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-269722" context rescaled to 1 replicas
	I1206 09:11:28.892887  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 09:11:28.892912  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 09:11:29.154236  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:11:29.154271  388517 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 09:11:29.383179  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:11:31.838578  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.740399341s)
	I1206 09:11:31.842964  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 09:11:31.846059  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:31.846625  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:31.846661  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:31.846877  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:32.206384  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 09:11:32.398884  388517 addons.go:239] Setting addon gcp-auth=true in "addons-269722"
	I1206 09:11:32.398959  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:32.401192  388517 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 09:11:32.404036  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:32.404508  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:32.404543  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:32.404739  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:33.380508  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.172285689s)
	I1206 09:11:33.380567  388517 addons.go:495] Verifying addon ingress=true in "addons-269722"
	I1206 09:11:33.380566  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.155226513s)
	I1206 09:11:33.380618  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.109753242s)
	I1206 09:11:33.382778  388517 out.go:179] * Verifying ingress addon...
	I1206 09:11:33.384997  388517 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 09:11:33.394151  388517 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:11:33.394167  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:33.983745  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:34.442405  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:34.961428  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:35.544843  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:35.959086  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:36.477596  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:36.933661  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:37.492983  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:37.907682  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:38.464342  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:38.476878  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.145459322s)
	I1206 09:11:38.476953  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.026949113s)
	I1206 09:11:38.477048  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.007974684s)
	I1206 09:11:38.477116  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.780538742s)
	I1206 09:11:38.477233  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.190220804s)
	I1206 09:11:38.477253  388517 addons.go:495] Verifying addon registry=true in "addons-269722"
	I1206 09:11:38.477312  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.77149962s)
	I1206 09:11:38.477336  388517 addons.go:495] Verifying addon metrics-server=true in "addons-269722"
	I1206 09:11:38.477363  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.627610125s)
	I1206 09:11:38.477525  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.594927288s)
	I1206 09:11:38.477544  388517 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.104332654s)
	I1206 09:11:38.477571  388517 api_server.go:72] duration metric: took 14.11116064s to wait for apiserver process to appear ...
	I1206 09:11:38.477583  388517 api_server.go:88] waiting for apiserver healthz status ...
	W1206 09:11:38.477581  388517 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:11:38.477604  388517 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1206 09:11:38.477604  388517 retry.go:31] will retry after 298.178363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
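
The "ensure CRDs are installed first" failure is the usual ordering problem when a custom resource (here the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml) is applied in the same batch as the CRDs that define it; the log shows the addon manager simply retrying, and further down re-running the apply with --force, once the CRDs are registered. A minimal sketch of the equivalent manual two-step workaround, assuming the same addon manifests are present on the node; the file paths are taken from the log, the wait timeout is an illustrative choice:

    # register the snapshot CRDs first and wait until the API server serves them
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    # only then apply the objects that depend on those CRDs
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
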
	I1206 09:11:38.477795  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.094573264s)
	I1206 09:11:38.477823  388517 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-269722"
	I1206 09:11:38.477842  388517 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.076624226s)
	I1206 09:11:38.478884  388517 out.go:179] * Verifying registry addon...
	I1206 09:11:38.478890  388517 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-269722 service yakd-dashboard -n yakd-dashboard
	
	I1206 09:11:38.479684  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:38.479686  388517 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 09:11:38.481128  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 09:11:38.482570  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 09:11:38.482875  388517 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 09:11:38.483935  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 09:11:38.483956  388517 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 09:11:38.542927  388517 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
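
The healthz probe above can be reproduced from the host with a single request; a minimal sketch, using the apiserver address from the log and -k because the cluster's CA is not assumed to be trusted locally:

    curl -k https://192.168.39.220:8443/healthz
    # ok
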
	I1206 09:11:38.560082  388517 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:11:38.560109  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:38.560250  388517 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:11:38.560266  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:38.564812  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 09:11:38.564836  388517 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 09:11:38.577730  388517 api_server.go:141] control plane version: v1.34.2
	I1206 09:11:38.577765  388517 api_server.go:131] duration metric: took 100.173477ms to wait for apiserver health ...
	I1206 09:11:38.577777  388517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:11:38.641466  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:11:38.641493  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 09:11:38.668346  388517 system_pods.go:59] 20 kube-system pods found
	I1206 09:11:38.668390  388517 system_pods.go:61] "amd-gpu-device-plugin-4x5bp" [200b561d-9b38-41b5-b7ed-1d1b8aa9c977] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:11:38.668407  388517 system_pods.go:61] "coredns-66bc5c9577-l7sr8" [863c5ad0-c918-455d-8af1-40c9e1948ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:38.668417  388517 system_pods.go:61] "coredns-66bc5c9577-tn6dd" [1471497e-5fa4-48d4-a3c2-4d89904ed640] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1206 09:11:38.668435  388517 system_pods.go:61] "csi-hostpath-attacher-0" [bd1f1e77-8cad-40a2-97e3-2b05daf622f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:11:38.668450  388517 system_pods.go:61] "csi-hostpath-resizer-0" [4ed9076c-603a-48cd-a0d1-189d5fd51651] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:11:38.668460  388517 system_pods.go:61] "csi-hostpathplugin-c5bss" [d0b3695c-3b42-4065-9bdf-1b2206023c5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:11:38.668469  388517 system_pods.go:61] "etcd-addons-269722" [751c8eff-2c50-4b41-9193-90db8a0636bf] Running
	I1206 09:11:38.668476  388517 system_pods.go:61] "kube-apiserver-addons-269722" [d32278cf-92c2-455c-b174-fb8a83dadda4] Running
	I1206 09:11:38.668484  388517 system_pods.go:61] "kube-controller-manager-addons-269722" [7e253ad0-19bb-4870-926b-a1569f6f1398] Running
	I1206 09:11:38.668493  388517 system_pods.go:61] "kube-ingress-dns-minikube" [be7d521f-b31b-4231-bd74-8a66d93c3fc4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:11:38.668501  388517 system_pods.go:61] "kube-proxy-c2km9" [fb4b1fd3-c1e4-4d05-b0c9-5b52f82e1849] Running
	I1206 09:11:38.668508  388517 system_pods.go:61] "kube-scheduler-addons-269722" [73132ab3-f6c2-40cb-b3ba-aee3ff21019d] Running
	I1206 09:11:38.668520  388517 system_pods.go:61] "metrics-server-85b7d694d7-h2jq2" [c453240d-89be-44da-9070-e49d7ebbc593] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:11:38.668526  388517 system_pods.go:61] "nvidia-device-plugin-daemonset-knqvl" [916799e0-a31e-4b9a-9acc-b02b72d66299] Running
	I1206 09:11:38.668535  388517 system_pods.go:61] "registry-6b586f9694-rbbt6" [ec4e4a7f-6fd3-435d-bd23-ab587ffa45ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:11:38.668543  388517 system_pods.go:61] "registry-creds-764b6fb674-hkrh8" [b7741462-59ef-4947-ac5d-b5ffab88a570] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:11:38.668558  388517 system_pods.go:61] "registry-proxy-hbw67" [d47f2901-94d3-4e16-a0a8-5155e3f36879] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:11:38.668574  388517 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qbp6w" [0ead8e94-20c0-4dec-801d-66bd3dc39a02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.668644  388517 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v9sd5" [84d9cd78-04cb-4f8d-b8e7-a694b55e490a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.668650  388517 system_pods.go:61] "storage-provisioner" [07857490-6084-4734-a54d-f7de8ca29ea5] Running
	I1206 09:11:38.668660  388517 system_pods.go:74] duration metric: took 90.874732ms to wait for pod list to return data ...
	I1206 09:11:38.668672  388517 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:11:38.705679  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:11:38.776568  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:38.781850  388517 default_sa.go:45] found service account: "default"
	I1206 09:11:38.781885  388517 default_sa.go:55] duration metric: took 113.206818ms for default service account to be created ...
	I1206 09:11:38.781896  388517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:11:38.893236  388517 system_pods.go:86] 20 kube-system pods found
	I1206 09:11:38.893269  388517 system_pods.go:89] "amd-gpu-device-plugin-4x5bp" [200b561d-9b38-41b5-b7ed-1d1b8aa9c977] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:11:38.893310  388517 system_pods.go:89] "coredns-66bc5c9577-l7sr8" [863c5ad0-c918-455d-8af1-40c9e1948ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:38.893318  388517 system_pods.go:89] "coredns-66bc5c9577-tn6dd" [1471497e-5fa4-48d4-a3c2-4d89904ed640] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1206 09:11:38.893328  388517 system_pods.go:89] "csi-hostpath-attacher-0" [bd1f1e77-8cad-40a2-97e3-2b05daf622f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:11:38.893334  388517 system_pods.go:89] "csi-hostpath-resizer-0" [4ed9076c-603a-48cd-a0d1-189d5fd51651] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:11:38.893340  388517 system_pods.go:89] "csi-hostpathplugin-c5bss" [d0b3695c-3b42-4065-9bdf-1b2206023c5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:11:38.893344  388517 system_pods.go:89] "etcd-addons-269722" [751c8eff-2c50-4b41-9193-90db8a0636bf] Running
	I1206 09:11:38.893348  388517 system_pods.go:89] "kube-apiserver-addons-269722" [d32278cf-92c2-455c-b174-fb8a83dadda4] Running
	I1206 09:11:38.893352  388517 system_pods.go:89] "kube-controller-manager-addons-269722" [7e253ad0-19bb-4870-926b-a1569f6f1398] Running
	I1206 09:11:38.893357  388517 system_pods.go:89] "kube-ingress-dns-minikube" [be7d521f-b31b-4231-bd74-8a66d93c3fc4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:11:38.893361  388517 system_pods.go:89] "kube-proxy-c2km9" [fb4b1fd3-c1e4-4d05-b0c9-5b52f82e1849] Running
	I1206 09:11:38.893364  388517 system_pods.go:89] "kube-scheduler-addons-269722" [73132ab3-f6c2-40cb-b3ba-aee3ff21019d] Running
	I1206 09:11:38.893369  388517 system_pods.go:89] "metrics-server-85b7d694d7-h2jq2" [c453240d-89be-44da-9070-e49d7ebbc593] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:11:38.893374  388517 system_pods.go:89] "nvidia-device-plugin-daemonset-knqvl" [916799e0-a31e-4b9a-9acc-b02b72d66299] Running
	I1206 09:11:38.893379  388517 system_pods.go:89] "registry-6b586f9694-rbbt6" [ec4e4a7f-6fd3-435d-bd23-ab587ffa45ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:11:38.893383  388517 system_pods.go:89] "registry-creds-764b6fb674-hkrh8" [b7741462-59ef-4947-ac5d-b5ffab88a570] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:11:38.893389  388517 system_pods.go:89] "registry-proxy-hbw67" [d47f2901-94d3-4e16-a0a8-5155e3f36879] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:11:38.893395  388517 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qbp6w" [0ead8e94-20c0-4dec-801d-66bd3dc39a02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.893400  388517 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v9sd5" [84d9cd78-04cb-4f8d-b8e7-a694b55e490a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.893403  388517 system_pods.go:89] "storage-provisioner" [07857490-6084-4734-a54d-f7de8ca29ea5] Running
	I1206 09:11:38.893410  388517 system_pods.go:126] duration metric: took 111.509411ms to wait for k8s-apps to be running ...
	I1206 09:11:38.893420  388517 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:11:38.893463  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:39.039991  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:39.105053  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:39.105115  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:39.435086  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:39.577305  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:39.578361  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:39.891557  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.023055  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.023335  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:40.299367  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.593645009s)
	I1206 09:11:40.300442  388517 addons.go:495] Verifying addon gcp-auth=true in "addons-269722"
	I1206 09:11:40.302591  388517 out.go:179] * Verifying gcp-auth addon...
	I1206 09:11:40.304667  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 09:11:40.334052  388517 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 09:11:40.334086  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:40.389629  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.490307  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:40.490431  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.813628  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:40.836756  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.060127251s)
	I1206 09:11:40.836796  388517 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.943309249s)
	I1206 09:11:40.836822  388517 system_svc.go:56] duration metric: took 1.943395217s WaitForService to wait for kubelet
	I1206 09:11:40.836835  388517 kubeadm.go:587] duration metric: took 16.470422509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:11:40.836870  388517 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:11:40.843939  388517 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 09:11:40.843963  388517 node_conditions.go:123] node cpu capacity is 2
	I1206 09:11:40.843980  388517 node_conditions.go:105] duration metric: took 7.101649ms to run NodePressure ...
	I1206 09:11:40.844002  388517 start.go:242] waiting for startup goroutines ...
	I1206 09:11:40.890430  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.986853  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.992475  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:41.355777  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:41.389062  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:41.487963  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:41.489146  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:41.808891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:41.889779  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:41.985833  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:41.987429  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:42.308166  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:42.409444  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:42.510304  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:42.511035  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:42.809432  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:42.888458  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:42.984315  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:42.987586  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:43.308446  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:43.388943  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:43.496391  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:43.496607  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:43.808230  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:43.888549  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:43.984398  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:43.986840  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:44.312899  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:44.514152  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:44.514383  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:44.515204  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:44.811435  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:44.888384  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:44.984563  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:44.986735  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:45.307401  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:45.388721  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:45.486271  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:45.488952  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:45.808083  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:45.888466  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:45.985838  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:45.987005  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:46.309162  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:46.390486  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:46.484411  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:46.486023  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:46.809473  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:46.888547  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:46.984691  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:46.987824  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:47.308194  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:47.388621  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:47.488407  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:47.488489  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:47.808350  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:47.888379  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:47.984429  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:47.986654  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:48.308303  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:48.391026  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:48.664162  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:48.666762  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:48.808417  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:48.888241  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:48.983979  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:48.986690  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:49.308241  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:49.388925  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:49.484568  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:49.486742  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:49.809515  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:49.889646  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:49.987428  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:49.988527  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:50.366787  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:50.389057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:50.486489  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:50.487907  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:50.810176  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:50.910430  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:50.984648  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:50.992028  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.319081  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:51.388999  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:51.489012  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:51.492499  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.808942  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:51.896270  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:51.990446  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.992371  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:52.309057  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:52.389352  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:52.484414  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:52.486682  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:52.809190  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:52.888338  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:52.991907  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:52.992417  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.307785  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:53.390249  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:53.484717  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.486614  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:53.810677  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:53.889084  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:53.987650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.990484  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:54.315414  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:54.395125  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:54.494235  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:54.494236  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:54.824289  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:54.888711  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:54.984659  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:54.987146  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:55.308481  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:55.390618  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:55.484329  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:55.485893  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:55.809298  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:55.895192  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:55.989404  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:55.993237  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:56.311289  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:56.389393  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:56.487349  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:56.487525  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:56.808606  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:56.889213  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:56.985510  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:56.991535  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:57.308723  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:57.388636  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:57.488790  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:57.490213  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:57.809073  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:57.887830  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:57.984304  388517 kapi.go:107] duration metric: took 19.503171238s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 09:11:57.987671  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:58.309052  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:58.389257  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:58.490899  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:58.809457  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:58.890577  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.025290  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:59.309296  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:59.392111  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.492783  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:59.807475  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:59.892512  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.986432  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:00.357752  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:00.391649  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:00.485367  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:00.809392  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:00.887883  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:00.986127  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:01.312877  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:01.413507  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:01.486873  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:01.809042  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:01.889057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:01.986042  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:02.311892  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:02.390027  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:02.491375  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:02.923841  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:02.927183  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:02.986095  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:03.309017  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:03.390050  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:03.486194  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:03.812456  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:03.892317  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:03.986695  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:04.308544  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:04.389102  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:04.486496  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:04.810301  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:04.888379  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:04.986924  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:05.308837  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:05.390825  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:05.485772  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:05.807540  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:05.888733  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:05.985799  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:06.310889  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:06.389329  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:06.492425  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:06.808561  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:06.888635  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:06.985484  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:07.309758  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:07.390275  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:07.486771  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:07.807681  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:07.888485  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:07.987584  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:08.309272  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:08.388617  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:08.487646  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:08.809312  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:08.888519  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:08.988459  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:09.309597  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:09.411374  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:09.487378  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:09.812712  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:09.912033  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.012090  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:10.308609  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:10.389736  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.488553  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:10.808609  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:10.893781  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.986159  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:11.669172  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:11.670324  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:11.671190  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:11.811594  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:11.892535  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:11.985928  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:12.310097  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:12.390596  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:12.489116  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:12.809321  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:12.890619  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:12.987653  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:13.309120  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:13.388316  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:13.488650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:13.808316  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:13.889333  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:13.986213  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:14.308276  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:14.388283  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:14.487207  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:14.808143  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:14.888955  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:14.986279  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:15.309037  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:15.388329  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:15.488214  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:15.810501  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:15.896511  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:15.986845  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:16.307928  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:16.390728  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:16.485976  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:16.816944  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:16.970568  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:16.988372  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:17.312911  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:17.390564  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:17.486836  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:17.811792  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:17.891576  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:17.988049  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:18.309919  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:18.388844  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:18.486086  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:18.809596  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:18.890914  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:18.986230  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:19.310480  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:19.410702  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:19.486633  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:19.807918  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:19.888811  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:19.987072  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:20.309606  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:20.412057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:20.512925  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:20.817199  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:20.949254  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:20.990626  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:21.312159  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:21.389204  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:21.488639  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:21.810891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:21.888759  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:21.988415  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:22.309245  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:22.391268  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:22.486340  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:22.808382  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:22.889770  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:22.988997  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:23.309823  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:23.388910  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:23.489579  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:23.810562  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:23.889125  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:23.986750  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:24.308898  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:24.389306  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:24.486339  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:24.809381  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:24.888322  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:24.987056  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:25.309252  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:25.388372  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:25.486924  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:25.810099  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:25.891569  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:25.993945  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:26.314253  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:26.503975  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:26.504104  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:26.811809  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:26.889063  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:26.990570  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:27.308661  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:27.388783  388517 kapi.go:107] duration metric: took 54.003785227s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 09:12:27.539433  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:27.808824  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:27.987339  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:28.311281  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:28.487383  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:28.810397  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:28.990303  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:29.309345  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:29.488470  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:29.811844  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:29.987408  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:30.311108  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:30.487049  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:30.807650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:30.986406  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:31.309915  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:31.486400  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:31.814032  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:31.989103  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:32.311817  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:32.486527  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:32.808601  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:32.989352  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:33.309084  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:33.486427  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:33.809272  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:33.986717  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:34.308891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:34.486989  388517 kapi.go:107] duration metric: took 56.004420234s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 09:12:34.808808  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:35.310012  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:35.808588  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:36.309169  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:36.808993  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:37.310066  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:37.808459  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:38.308629  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:38.811741  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:39.309361  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:39.809037  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:40.308704  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:40.808398  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:41.307791  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:41.808294  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:42.308956  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:42.809502  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:43.307669  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:43.810175  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:44.309568  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:44.809320  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:45.309320  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:45.807962  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:46.311821  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:46.808138  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:47.308750  388517 kapi.go:107] duration metric: took 1m7.004080739s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 09:12:47.309965  388517 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-269722 cluster.
	I1206 09:12:47.310907  388517 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 09:12:47.312086  388517 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 09:12:47.313288  388517 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, storage-provisioner, inspektor-gadget, registry-creds, volcano, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1206 09:12:47.314294  388517 addons.go:530] duration metric: took 1m22.947828238s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass storage-provisioner inspektor-gadget registry-creds volcano cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1206 09:12:47.314341  388517 start.go:247] waiting for cluster config update ...
	I1206 09:12:47.314373  388517 start.go:256] writing updated cluster config ...
	I1206 09:12:47.314678  388517 ssh_runner.go:195] Run: rm -f paused
	I1206 09:12:47.321984  388517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:12:47.325938  388517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l7sr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.331363  388517 pod_ready.go:94] pod "coredns-66bc5c9577-l7sr8" is "Ready"
	I1206 09:12:47.331382  388517 pod_ready.go:86] duration metric: took 5.423953ms for pod "coredns-66bc5c9577-l7sr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.333935  388517 pod_ready.go:83] waiting for pod "etcd-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.339670  388517 pod_ready.go:94] pod "etcd-addons-269722" is "Ready"
	I1206 09:12:47.339686  388517 pod_ready.go:86] duration metric: took 5.735911ms for pod "etcd-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.341852  388517 pod_ready.go:83] waiting for pod "kube-apiserver-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.348825  388517 pod_ready.go:94] pod "kube-apiserver-addons-269722" is "Ready"
	I1206 09:12:47.348841  388517 pod_ready.go:86] duration metric: took 6.965989ms for pod "kube-apiserver-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.351661  388517 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.728666  388517 pod_ready.go:94] pod "kube-controller-manager-addons-269722" is "Ready"
	I1206 09:12:47.728694  388517 pod_ready.go:86] duration metric: took 377.017246ms for pod "kube-controller-manager-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.928250  388517 pod_ready.go:83] waiting for pod "kube-proxy-c2km9" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.326318  388517 pod_ready.go:94] pod "kube-proxy-c2km9" is "Ready"
	I1206 09:12:48.326347  388517 pod_ready.go:86] duration metric: took 398.070754ms for pod "kube-proxy-c2km9" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.527945  388517 pod_ready.go:83] waiting for pod "kube-scheduler-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.925436  388517 pod_ready.go:94] pod "kube-scheduler-addons-269722" is "Ready"
	I1206 09:12:48.925477  388517 pod_ready.go:86] duration metric: took 397.504009ms for pod "kube-scheduler-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.925497  388517 pod_ready.go:40] duration metric: took 1.603486959s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:12:48.968795  388517 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:12:48.970523  388517 out.go:179] * Done! kubectl is now configured to use "addons-269722" cluster and "default" namespace by default
	
	
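The repeated kapi.go:96 entries above come from minikube polling each addon's pods by label selector until they report Running (or the wait times out), at which point kapi.go:107 records the duration. Below is a minimal, hypothetical client-go sketch of that polling pattern for context; the helper name waitForLabel, the 500ms interval, and the kubeconfig path are illustrative assumptions, not minikube's actual kapi.go implementation.

	// Hypothetical sketch of a label-selector wait loop (not minikube's real code).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until all are Running.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or pods not created yet: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}
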
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	557d40ab6aa66       56cc512116c8f       6 minutes ago       Running             busybox                                  0                   09f2b56f9baa0       busybox                                    default
	29c2d038bf437       738351fd438f0       13 minutes ago      Running             csi-snapshotter                          0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	5d8ecc80d5382       931dbfd16f87c       13 minutes ago      Running             csi-provisioner                          0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	fd0e1a7571386       e899260153aed       13 minutes ago      Running             liveness-probe                           0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	24d11a8b11e79       e255e073c508c       13 minutes ago      Running             hostpath                                 0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	5ca832afab7b5       88ef14a257f42       13 minutes ago      Running             node-driver-registrar                    0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	c4cccebac4fc4       97fe896f8c07b       13 minutes ago      Running             controller                               0                   9ee054c3901ad       ingress-nginx-controller-6c8bf45fb-ndk8c   ingress-nginx
	2630d4a83ae5f       19a639eda60f0       13 minutes ago      Running             csi-resizer                              0                   a312cf43898ad       csi-hostpath-resizer-0                     kube-system
	5bd7e91038ad6       a1ed5895ba635       13 minutes ago      Running             csi-external-health-monitor-controller   0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	1ff38ec18e78f       59cbb42146a37       13 minutes ago      Running             csi-attacher                             0                   73074a1a93680       csi-hostpath-attacher-0                    kube-system
	278c91c11ce27       aa61ee9c70bc4       13 minutes ago      Running             volume-snapshot-controller               0                   4bcff1b74bfec       snapshot-controller-7d9fbc56b8-qbp6w       kube-system
	31ec84f4556b1       aa61ee9c70bc4       13 minutes ago      Running             volume-snapshot-controller               0                   49c8968cc1ce1       snapshot-controller-7d9fbc56b8-v9sd5       kube-system
	864a2ecb4396f       884bd0ac01c8f       13 minutes ago      Exited              patch                                    0                   3ddf53bb8795f       ingress-nginx-admission-patch-xpn6k        ingress-nginx
	c2e7e0b7588b1       884bd0ac01c8f       13 minutes ago      Exited              create                                   0                   1ca23ac12776f       ingress-nginx-admission-create-kl75g       ingress-nginx
	2774623c95b6c       b6ab53fbfedaa       13 minutes ago      Running             minikube-ingress-dns                     0                   a84f9f0b8a344       kube-ingress-dns-minikube                  kube-system
	d9e6d13d8e418       d5e667c0f2bb6       14 minutes ago      Running             amd-gpu-device-plugin                    0                   479fca73c33e3       amd-gpu-device-plugin-4x5bp                kube-system
	a9394a7445ed6       6e38f40d628db       14 minutes ago      Running             storage-provisioner                      0                   89b1f84c8945f       storage-provisioner                        kube-system
	e636e6172c8c9       52546a367cc9e       14 minutes ago      Running             coredns                                  0                   18cf9f60905af       coredns-66bc5c9577-l7sr8                   kube-system
	d9ab1c94b0adc       8aa150647e88a       14 minutes ago      Running             kube-proxy                               0                   7ce46fc8fe779       kube-proxy-c2km9                           kube-system
	f7319b640fed7       a3e246e9556e9       14 minutes ago      Running             etcd                                     0                   5d2b5e40c2235       etcd-addons-269722                         kube-system
	31363d509c1e7       88320b5498ff2       14 minutes ago      Running             kube-scheduler                           0                   f53f47f2f0dc9       kube-scheduler-addons-269722               kube-system
	c301895eb03e7       01e8bacf0f500       14 minutes ago      Running             kube-controller-manager                  0                   afc5069ef7820       kube-controller-manager-addons-269722      kube-system
	95341ea890f7a       a5f569d49a979       14 minutes ago      Running             kube-apiserver                           0                   fb1d3f9401a55       kube-apiserver-addons-269722               kube-system
	
	
	==> containerd <==
	Dec 06 09:25:11 addons-269722 containerd[831]: time="2025-12-06T09:25:11.855954417Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:25:11 addons-269722 containerd[831]: time="2025-12-06T09:25:11.856036556Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.812451036Z" level=info msg="RemoveContainer for \"2850465598faa66ce3163cb73dbef15eaa07b7497f4ea24dc9a077741676620a\""
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.819362532Z" level=info msg="RemoveContainer for \"2850465598faa66ce3163cb73dbef15eaa07b7497f4ea24dc9a077741676620a\" returns successfully"
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.821991754Z" level=info msg="StopPodSandbox for \"3e10d3adfa6101feed17a6216e24c0c3c5fcd4c5161a199232c60f2c0d9eaad7\""
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.850947012Z" level=info msg="TearDown network for sandbox \"3e10d3adfa6101feed17a6216e24c0c3c5fcd4c5161a199232c60f2c0d9eaad7\" successfully"
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.850990891Z" level=info msg="StopPodSandbox for \"3e10d3adfa6101feed17a6216e24c0c3c5fcd4c5161a199232c60f2c0d9eaad7\" returns successfully"
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.851377287Z" level=info msg="RemovePodSandbox for \"3e10d3adfa6101feed17a6216e24c0c3c5fcd4c5161a199232c60f2c0d9eaad7\""
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.851424173Z" level=info msg="Forcibly stopping sandbox \"3e10d3adfa6101feed17a6216e24c0c3c5fcd4c5161a199232c60f2c0d9eaad7\""
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.878430215Z" level=info msg="TearDown network for sandbox \"3e10d3adfa6101feed17a6216e24c0c3c5fcd4c5161a199232c60f2c0d9eaad7\" successfully"
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.884750016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e10d3adfa6101feed17a6216e24c0c3c5fcd4c5161a199232c60f2c0d9eaad7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.884898481Z" level=info msg="RemovePodSandbox \"3e10d3adfa6101feed17a6216e24c0c3c5fcd4c5161a199232c60f2c0d9eaad7\" returns successfully"
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.885459805Z" level=info msg="StopPodSandbox for \"5769400f5ef6b17191e270fb4aedec62969db466103ab321b855874f40edf11c\""
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.912671636Z" level=info msg="TearDown network for sandbox \"5769400f5ef6b17191e270fb4aedec62969db466103ab321b855874f40edf11c\" successfully"
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.912717584Z" level=info msg="StopPodSandbox for \"5769400f5ef6b17191e270fb4aedec62969db466103ab321b855874f40edf11c\" returns successfully"
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.913667954Z" level=info msg="RemovePodSandbox for \"5769400f5ef6b17191e270fb4aedec62969db466103ab321b855874f40edf11c\""
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.913773219Z" level=info msg="Forcibly stopping sandbox \"5769400f5ef6b17191e270fb4aedec62969db466103ab321b855874f40edf11c\""
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.943432371Z" level=info msg="TearDown network for sandbox \"5769400f5ef6b17191e270fb4aedec62969db466103ab321b855874f40edf11c\" successfully"
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.949312664Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5769400f5ef6b17191e270fb4aedec62969db466103ab321b855874f40edf11c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Dec 06 09:25:21 addons-269722 containerd[831]: time="2025-12-06T09:25:21.949375560Z" level=info msg="RemovePodSandbox \"5769400f5ef6b17191e270fb4aedec62969db466103ab321b855874f40edf11c\" returns successfully"
	Dec 06 09:25:36 addons-269722 containerd[831]: time="2025-12-06T09:25:36.923482235Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Dec 06 09:25:36 addons-269722 containerd[831]: time="2025-12-06T09:25:36.926151533Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:25:37 addons-269722 containerd[831]: time="2025-12-06T09:25:37.180835139Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:25:37 addons-269722 containerd[831]: time="2025-12-06T09:25:37.849871313Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:25:37 addons-269722 containerd[831]: time="2025-12-06T09:25:37.849974577Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	
	
	==> coredns [e636e6172c8c93ebe7783047ae4449227f6f37f80a082ff4fd383ebc5d08fdbe] <==
	[INFO] 10.244.0.8:51474 - 59613 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000209051s
	[INFO] 10.244.0.8:51474 - 58064 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00013173s
	[INFO] 10.244.0.8:51474 - 29072 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084614s
	[INFO] 10.244.0.8:51474 - 28407 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000124845s
	[INFO] 10.244.0.8:51474 - 5185 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106747s
	[INFO] 10.244.0.8:51474 - 28903 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000097914s
	[INFO] 10.244.0.8:51474 - 44135 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000086701s
	[INFO] 10.244.0.8:42198 - 56025 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000124465s
	[INFO] 10.244.0.8:42198 - 58448 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118323s
	[INFO] 10.244.0.8:40240 - 52465 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104193s
	[INFO] 10.244.0.8:40240 - 52746 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113004s
	[INFO] 10.244.0.8:49362 - 65347 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126485s
	[INFO] 10.244.0.8:49362 - 110 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000216341s
	[INFO] 10.244.0.8:51040 - 59068 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087119s
	[INFO] 10.244.0.8:51040 - 59346 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118565s
	[INFO] 10.244.0.27:48228 - 49165 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000319642s
	[INFO] 10.244.0.27:40396 - 12915 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001198011s
	[INFO] 10.244.0.27:39038 - 53409 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158695s
	[INFO] 10.244.0.27:59026 - 7807 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134321s
	[INFO] 10.244.0.27:32836 - 36351 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085705s
	[INFO] 10.244.0.27:33578 - 24448 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114082s
	[INFO] 10.244.0.27:49566 - 16674 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003361826s
	[INFO] 10.244.0.27:37372 - 21961 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004334216s
	[INFO] 10.244.0.31:37715 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000570157s
	[INFO] 10.244.0.31:57352 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162981s
	
	
	==> describe nodes <==
	Name:               addons-269722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-269722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-269722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_11_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-269722
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-269722"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-269722
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:25:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:19:50 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:19:50 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:19:50 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:19:50 +0000   Sat, 06 Dec 2025 09:11:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    addons-269722
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 faaa974faf9d46f8a3b502afcdf78e43
	  System UUID:                faaa974f-af9d-46f8-a3b5-02afcdf78e43
	  Boot ID:                    33004088-aa48-42d5-ac29-91fbfe5a6c68
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-ndk8c    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         14m
	  kube-system                 amd-gpu-device-plugin-4x5bp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-l7sr8                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-c5bss                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-addons-269722                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-269722                250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-269722       200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-c2km9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-269722                100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-qbp6w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-v9sd5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node addons-269722 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node addons-269722 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node addons-269722 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node addons-269722 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node addons-269722 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node addons-269722 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m                kubelet          Node addons-269722 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node addons-269722 event: Registered Node addons-269722 in Controller
	
	
	==> dmesg <==
	[  +9.726100] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.920301] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 09:12] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.529426] kauditd_printk_skb: 85 callbacks suppressed
	[  +1.897097] kauditd_printk_skb: 166 callbacks suppressed
	[  +2.318976] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.568626] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.319087] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.000694] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 53 callbacks suppressed
	[Dec 6 09:18] kauditd_printk_skb: 47 callbacks suppressed
	[ +48.658661] kauditd_printk_skb: 11 callbacks suppressed
	[Dec 6 09:19] kauditd_printk_skb: 67 callbacks suppressed
	[ +10.881930] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000283] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.748225] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 37 callbacks suppressed
	[  +3.911192] kauditd_printk_skb: 177 callbacks suppressed
	[  +1.378691] kauditd_printk_skb: 126 callbacks suppressed
	[Dec 6 09:21] kauditd_printk_skb: 15 callbacks suppressed
	[  +0.000310] kauditd_printk_skb: 9 callbacks suppressed
	[Dec 6 09:23] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 6 09:24] kauditd_printk_skb: 9 callbacks suppressed
	[ +20.422822] kauditd_printk_skb: 26 callbacks suppressed
	[ +24.965389] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [f7319b640fed7119b3d158c30e3bc2dd128fc0442cd17b3131fd715d76a44c9a] <==
	{"level":"warn","ts":"2025-12-06T09:12:02.912205Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.029086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:02.912295Z","caller":"traceutil/trace.go:172","msg":"trace[2137904910] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1147; }","duration":"113.394912ms","start":"2025-12-06T09:12:02.798891Z","end":"2025-12-06T09:12:02.912286Z","steps":["trace[2137904910] 'agreement among raft nodes before linearized reading'  (duration: 112.675357ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:02.912373Z","caller":"traceutil/trace.go:172","msg":"trace[1452242551] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"297.5818ms","start":"2025-12-06T09:12:02.614786Z","end":"2025-12-06T09:12:02.912368Z","steps":["trace[1452242551] 'process raft request'  (duration: 296.713443ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:11.568524Z","caller":"traceutil/trace.go:172","msg":"trace[574553895] linearizableReadLoop","detail":"{readStateIndex:1209; appliedIndex:1209; }","duration":"261.730098ms","start":"2025-12-06T09:12:11.306778Z","end":"2025-12-06T09:12:11.568508Z","steps":["trace[574553895] 'read index received'  (duration: 261.726617ms)","trace[574553895] 'applied index is now lower than readState.Index'  (duration: 3.06µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.650658Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.826038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.650735Z","caller":"traceutil/trace.go:172","msg":"trace[1035894961] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1184; }","duration":"343.946028ms","start":"2025-12-06T09:12:11.306774Z","end":"2025-12-06T09:12:11.650720Z","steps":["trace[1035894961] 'agreement among raft nodes before linearized reading'  (duration: 261.814135ms)","trace[1035894961] 'range keys from in-memory index tree'  (duration: 81.970543ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.650785Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:12:11.306763Z","time spent":"344.009881ms","remote":"127.0.0.1:53040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:12:11.651140Z","caller":"traceutil/trace.go:172","msg":"trace[483765702] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"350.392753ms","start":"2025-12-06T09:12:11.300733Z","end":"2025-12-06T09:12:11.651125Z","steps":["trace[483765702] 'process raft request'  (duration: 267.904896ms)","trace[483765702] 'compare'  (duration: 81.445642ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.651205Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:12:11.300717Z","time spent":"350.449818ms","remote":"127.0.0.1:53164","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<>"}
	{"level":"warn","ts":"2025-12-06T09:12:11.651419Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.416676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651477Z","caller":"traceutil/trace.go:172","msg":"trace[167194031] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1185; }","duration":"172.473278ms","start":"2025-12-06T09:12:11.478992Z","end":"2025-12-06T09:12:11.651465Z","steps":["trace[167194031] 'agreement among raft nodes before linearized reading'  (duration: 172.38943ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:11.651605Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.385049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651660Z","caller":"traceutil/trace.go:172","msg":"trace[1143122093] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1185; }","duration":"270.440925ms","start":"2025-12-06T09:12:11.381211Z","end":"2025-12-06T09:12:11.651652Z","steps":["trace[1143122093] 'agreement among raft nodes before linearized reading'  (duration: 270.367937ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:11.651812Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"298.784519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651836Z","caller":"traceutil/trace.go:172","msg":"trace[535987253] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1185; }","duration":"298.810243ms","start":"2025-12-06T09:12:11.353018Z","end":"2025-12-06T09:12:11.651829Z","steps":["trace[535987253] 'agreement among raft nodes before linearized reading'  (duration: 298.76303ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:20.929795Z","caller":"traceutil/trace.go:172","msg":"trace[628627548] transaction","detail":"{read_only:false; response_revision:1260; number_of_response:1; }","duration":"105.667962ms","start":"2025-12-06T09:12:20.824110Z","end":"2025-12-06T09:12:20.929778Z","steps":["trace[628627548] 'process raft request'  (duration: 105.596429ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:23.778852Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.603155ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:23.779380Z","caller":"traceutil/trace.go:172","msg":"trace[424992269] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1281; }","duration":"218.131424ms","start":"2025-12-06T09:12:23.561231Z","end":"2025-12-06T09:12:23.779363Z","steps":["trace[424992269] 'range keys from in-memory index tree'  (duration: 217.594054ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:26.494846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.642654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:26.495325Z","caller":"traceutil/trace.go:172","msg":"trace[102060551] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1290; }","duration":"113.581468ms","start":"2025-12-06T09:12:26.381729Z","end":"2025-12-06T09:12:26.495310Z","steps":["trace[102060551] 'range keys from in-memory index tree'  (duration: 112.580581ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:13:20.713154Z","caller":"traceutil/trace.go:172","msg":"trace[1259088558] transaction","detail":"{read_only:false; response_revision:1478; number_of_response:1; }","duration":"103.020152ms","start":"2025-12-06T09:13:20.609588Z","end":"2025-12-06T09:13:20.712608Z","steps":["trace[1259088558] 'process raft request'  (duration: 102.875042ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:18:37.035751Z","caller":"traceutil/trace.go:172","msg":"trace[10856222] transaction","detail":"{read_only:false; response_revision:2013; number_of_response:1; }","duration":"171.241207ms","start":"2025-12-06T09:18:36.864442Z","end":"2025-12-06T09:18:37.035683Z","steps":["trace[10856222] 'process raft request'  (duration: 170.245197ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:21:15.400819Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1792}
	{"level":"info","ts":"2025-12-06T09:21:15.571936Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1792,"took":"167.85031ms","hash":732329829,"current-db-size-bytes":10612736,"current-db-size":"11 MB","current-db-size-in-use-bytes":7192576,"current-db-size-in-use":"7.2 MB"}
	{"level":"info","ts":"2025-12-06T09:21:15.572113Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":732329829,"revision":1792,"compact-revision":-1}
	
	
	==> kernel <==
	 09:25:42 up 14 min,  0 users,  load average: 0.48, 0.61, 0.61
	Linux addons-269722 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [95341ea890f7aa882f4bc2a6906002451241d8c5faa071707f5de92b27e20ce7] <==
	I1206 09:18:53.018886       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1206 09:18:53.078403       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1206 09:18:53.122449       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1206 09:18:53.135000       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1206 09:18:53.224059       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1206 09:18:53.568132       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1206 09:18:53.690499       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1206 09:18:53.805719       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1206 09:18:53.827769       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W1206 09:18:53.952581       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1206 09:18:54.124471       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1206 09:18:54.247215       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	I1206 09:18:54.320955       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1206 09:18:54.331045       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1206 09:18:54.378594       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1206 09:18:55.332669       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1206 09:18:55.731925       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1206 09:19:11.494382       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:53920: use of closed network connection
	E1206 09:19:11.675647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:53948: use of closed network connection
	I1206 09:19:20.977935       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.33.16"}
	E1206 09:19:32.614060       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:59722: use of closed network connection
	I1206 09:19:42.464058       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 09:19:42.639393       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.3.234"}
	I1206 09:19:58.097224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1206 09:21:17.002283       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c301895eb03e76a7f98c21fd67491f3e3114e008ac0bc660fb3871dde69fdff8] <==
	E1206 09:24:49.402388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:54.039663       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:24:57.019984       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:57.021433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:57.240919       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:57.242078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:25:09.040311       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:25:11.148420       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:25:11.149668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:25:13.932774       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:25:13.933936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:25:17.717163       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:25:17.718802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:25:22.911168       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:25:22.912342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:25:23.532462       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:25:23.533429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:25:24.040635       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:25:27.831947       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:25:27.833476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:25:33.881365       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:25:33.882466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:25:39.041526       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:25:41.695484       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:25:41.697353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [d9ab1c94b0adcd19eace1b7a10c0f065d7c953fc676839d82393eaab4f0c1819] <==
	I1206 09:11:27.430778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:11:27.531232       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:11:27.531444       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.220"]
	E1206 09:11:27.531895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:11:27.678473       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:11:27.678923       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:11:27.679749       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:11:27.716021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:11:27.719059       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:11:27.719117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:11:27.726703       1 config.go:200] "Starting service config controller"
	I1206 09:11:27.726733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:11:27.726750       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:11:27.726754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:11:27.730726       1 config.go:309] "Starting node config controller"
	I1206 09:11:27.730967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:11:27.730985       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:11:27.726764       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:11:27.736817       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:11:27.827489       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:11:27.827527       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:11:27.837415       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [31363d509c1e784ea3123303af98a26bde6cf40b74abff49509bf33b99ca8f00] <==
	E1206 09:11:17.083720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:11:17.083797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:11:17.083954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:11:17.085026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:11:17.085610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:11:17.085977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:11:17.086442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:11:17.086495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:11:17.086552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:11:17.086667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:11:17.086930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:11:17.939163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:11:17.952354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:11:17.975596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:11:18.009464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:11:18.049043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:11:18.084056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:11:18.094385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:11:18.198477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:11:18.257306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:11:18.287686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:11:18.314012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:11:18.315115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:11:18.580055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:11:21.477327       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:24:52 addons-269722 kubelet[1529]: I1206 09:24:52.815841    1529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa5b3b82-29e5-4702-ac8a-2506e1077879-config-volume\") pod \"fa5b3b82-29e5-4702-ac8a-2506e1077879\" (UID: \"fa5b3b82-29e5-4702-ac8a-2506e1077879\") "
	Dec 06 09:24:52 addons-269722 kubelet[1529]: I1206 09:24:52.815909    1529 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skfpm\" (UniqueName: \"kubernetes.io/projected/fa5b3b82-29e5-4702-ac8a-2506e1077879-kube-api-access-skfpm\") pod \"fa5b3b82-29e5-4702-ac8a-2506e1077879\" (UID: \"fa5b3b82-29e5-4702-ac8a-2506e1077879\") "
	Dec 06 09:24:52 addons-269722 kubelet[1529]: I1206 09:24:52.818178    1529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa5b3b82-29e5-4702-ac8a-2506e1077879-config-volume" (OuterVolumeSpecName: "config-volume") pod "fa5b3b82-29e5-4702-ac8a-2506e1077879" (UID: "fa5b3b82-29e5-4702-ac8a-2506e1077879"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 06 09:24:52 addons-269722 kubelet[1529]: I1206 09:24:52.822150    1529 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5b3b82-29e5-4702-ac8a-2506e1077879-kube-api-access-skfpm" (OuterVolumeSpecName: "kube-api-access-skfpm") pod "fa5b3b82-29e5-4702-ac8a-2506e1077879" (UID: "fa5b3b82-29e5-4702-ac8a-2506e1077879"). InnerVolumeSpecName "kube-api-access-skfpm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 09:24:52 addons-269722 kubelet[1529]: I1206 09:24:52.916797    1529 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa5b3b82-29e5-4702-ac8a-2506e1077879-config-volume\") on node \"addons-269722\" DevicePath \"\""
	Dec 06 09:24:52 addons-269722 kubelet[1529]: I1206 09:24:52.916833    1529 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-skfpm\" (UniqueName: \"kubernetes.io/projected/fa5b3b82-29e5-4702-ac8a-2506e1077879-kube-api-access-skfpm\") on node \"addons-269722\" DevicePath \"\""
	Dec 06 09:24:52 addons-269722 kubelet[1529]: E1206 09:24:52.922645    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:24:53 addons-269722 kubelet[1529]: I1206 09:24:53.925898    1529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa5b3b82-29e5-4702-ac8a-2506e1077879" path="/var/lib/kubelet/pods/fa5b3b82-29e5-4702-ac8a-2506e1077879/volumes"
	Dec 06 09:24:55 addons-269722 kubelet[1529]: E1206 09:24:55.922956    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:25:06 addons-269722 kubelet[1529]: E1206 09:25:06.923568    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:25:07 addons-269722 kubelet[1529]: I1206 09:25:07.922665    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-4x5bp" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:25:11 addons-269722 kubelet[1529]: E1206 09:25:11.856430    1529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:25:11 addons-269722 kubelet[1529]: E1206 09:25:11.856482    1529 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:25:11 addons-269722 kubelet[1529]: E1206 09:25:11.856562    1529 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(955ebad4-b055-4cbf-95e3-243af9483d37): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:25:11 addons-269722 kubelet[1529]: E1206 09:25:11.856599    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:25:11 addons-269722 kubelet[1529]: I1206 09:25:11.922377    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-l7sr8" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:25:21 addons-269722 kubelet[1529]: I1206 09:25:21.809657    1529 scope.go:117] "RemoveContainer" containerID="2850465598faa66ce3163cb73dbef15eaa07b7497f4ea24dc9a077741676620a"
	Dec 06 09:25:21 addons-269722 kubelet[1529]: E1206 09:25:21.924578    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:25:26 addons-269722 kubelet[1529]: E1206 09:25:26.921770    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:25:31 addons-269722 kubelet[1529]: I1206 09:25:31.922705    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:25:37 addons-269722 kubelet[1529]: E1206 09:25:37.850143    1529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 06 09:25:37 addons-269722 kubelet[1529]: E1206 09:25:37.850182    1529 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 06 09:25:37 addons-269722 kubelet[1529]: E1206 09:25:37.850329    1529 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(9f2e16bd-5c5a-4de7-8925-9e8608d94e2b): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:25:37 addons-269722 kubelet[1529]: E1206 09:25:37.850368    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:25:37 addons-269722 kubelet[1529]: E1206 09:25:37.922208    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	
	
	==> storage-provisioner [a9394a7445ed60a376c7cd3e75aaac67b588412df8710faeea1ea9b282a9b119] <==
	W1206 09:25:17.536855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:19.540823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:19.547008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:21.551342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:21.558721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:23.562588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:23.568123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:25.572412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:25.577215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:27.581537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:27.590073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:29.594811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:29.605032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:31.609398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:31.614962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:33.620711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:33.628399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:35.632227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:35.637184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:37.641647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:37.648232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:39.651750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:39.658749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:41.662554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:25:41.670929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-269722 -n addons-269722
helpers_test.go:269: (dbg) Run:  kubectl --context addons-269722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-269722 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-269722 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k: exit status 1 (82.83566ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-269722/192.168.39.220
	Start Time:       Sat, 06 Dec 2025 09:19:42 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tppjg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tppjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m1s                   default-scheduler  Successfully assigned default/nginx to addons-269722
	  Normal   Pulling    2m58s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m57s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m57s (x5 over 5m59s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    51s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     51s (x21 over 5m59s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-269722/192.168.39.220
	Start Time:       Sat, 06 Dec 2025 09:19:41 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sn8jd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-sn8jd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-269722
	  Warning  Failed     5m45s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m14s (x4 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m14s (x5 over 6m1s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    48s (x20 over 6m)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     48s (x20 over 6m)     kubelet            Error: ImagePullBackOff
	  Normal   Pulling    33s (x6 over 6m2s)    kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z99d9 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-z99d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kl75g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xpn6k" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-269722 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.770693787s)
--- FAIL: TestAddons/parallel/CSI (373.07s)
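Note: both default/nginx and default/task-pv-pod above fail only at image pull, with Docker Hub answering unauthenticated pulls with 429 Too Many Requests. A minimal mitigation sketch, assuming Docker Hub credentials are available to the CI host; the secret name dockerhub-creds and the <user>/<token> placeholders are illustrative only and are not part of this run:

	# Create a registry credential and attach it to the default service account,
	# so kubelet pulls from docker.io count against an authenticated quota
	# instead of the anonymous rate limit seen in the events above.
	kubectl --context addons-269722 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> \
	  --docker-password=<token>
	kubectl --context addons-269722 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'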

                                                
                                    
TestAddons/parallel/LocalPath (344.9s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-269722 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-269722 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-269722 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... the identical "kubectl --context addons-269722 get pvc test-pvc" poll above repeats, unchanged, for the remainder of the 5m0s wait ...]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-269722 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (2.273µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
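The claim never left Pending within the 5m0s window. A quick triage sketch, assuming the local-path addon runs the upstream rancher provisioner in its default local-path-storage namespace (namespace and deployment name are the upstream defaults, not confirmed anywhere in this log):

	# Check why the claim stayed unbound, then see whether the provisioner ever acted on it.
	kubectl --context addons-269722 describe pvc test-pvc -n default
	kubectl --context addons-269722 -n local-path-storage logs deploy/local-path-provisioner --tail=50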
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-269722 -n addons-269722
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 logs -n 25: (1.082110187s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	| COMMAND | ARGS | PROFILE | USER | VERSION | START TIME | END TIME |
	|---------|------|---------|------|---------|------------|----------|
	| delete | -p download-only-345944 | download-only-345944 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | 06 Dec 25 09:10 UTC |
	| start | -o=json --download-only -p download-only-802744 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd | download-only-802744 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | |
	| delete | --all | minikube | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | 06 Dec 25 09:10 UTC |
	| delete | -p download-only-802744 | download-only-802744 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | 06 Dec 25 09:10 UTC |
	| delete | -p download-only-600827 | download-only-600827 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | 06 Dec 25 09:10 UTC |
	| delete | -p download-only-345944 | download-only-345944 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | 06 Dec 25 09:10 UTC |
	| delete | -p download-only-802744 | download-only-802744 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | 06 Dec 25 09:10 UTC |
	| start | --download-only -p binary-mirror-098159 --alsologtostderr --binary-mirror http://127.0.0.1:43773 --driver=kvm2  --container-runtime=containerd | binary-mirror-098159 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | |
	| delete | -p binary-mirror-098159 | binary-mirror-098159 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | 06 Dec 25 09:10 UTC |
	| addons | enable dashboard -p addons-269722 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | |
	| addons | disable dashboard -p addons-269722 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | |
	| start | -p addons-269722 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:10 UTC | 06 Dec 25 09:12 UTC |
	| addons | addons-269722 addons disable volcano --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:18 UTC | 06 Dec 25 09:19 UTC |
	| addons | addons-269722 addons disable gcp-auth --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | enable headlamp -p addons-269722 --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | addons-269722 addons disable nvidia-device-plugin --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | addons-269722 addons disable headlamp --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | addons-269722 addons disable cloud-spanner --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| ip | addons-269722 ip | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | addons-269722 addons disable registry --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | addons-269722 addons disable yakd --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | addons-269722 addons disable metrics-server --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | addons-269722 addons disable inspektor-gadget --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | configure registry-creds -f ./testdata/addons_testconfig.json -p addons-269722 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	| addons | addons-269722 addons disable registry-creds --alsologtostderr -v=1 | addons-269722 | jenkins | v1.37.0 | 06 Dec 25 09:19 UTC | 06 Dec 25 09:19 UTC |
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:41.905948  388517 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:41.906056  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:41.906068  388517 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:41.906073  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:41.906290  388517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:10:41.906764  388517 out.go:368] Setting JSON to false
	I1206 09:10:41.907751  388517 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6792,"bootTime":1765005450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:41.907809  388517 start.go:143] virtualization: kvm guest
	I1206 09:10:41.909713  388517 out.go:179] * [addons-269722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:10:41.911209  388517 notify.go:221] Checking for updates...
	I1206 09:10:41.911229  388517 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:10:41.912645  388517 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:41.913886  388517 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:10:41.915020  388517 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:41.919365  388517 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:10:41.920580  388517 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:10:41.921823  388517 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:10:41.950647  388517 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 09:10:41.951784  388517 start.go:309] selected driver: kvm2
	I1206 09:10:41.951797  388517 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:10:41.951808  388517 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:10:41.952432  388517 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:10:41.952640  388517 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:10:41.952666  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:10:41.952706  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:10:41.952714  388517 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:10:41.952753  388517 start.go:353] cluster config:
	{Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:41.952877  388517 iso.go:125] acquiring lock: {Name:mk1a7d442a240aa1785a2e6e751e007c5a8723f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:10:41.954741  388517 out.go:179] * Starting "addons-269722" primary control-plane node in "addons-269722" cluster
	I1206 09:10:41.955614  388517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:10:41.955638  388517 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1206 09:10:41.955646  388517 cache.go:65] Caching tarball of preloaded images
	I1206 09:10:41.955737  388517 preload.go:238] Found /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1206 09:10:41.955748  388517 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1206 09:10:41.956043  388517 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json ...
	I1206 09:10:41.956066  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json: {Name:mka83bdbdc23544e613eb52d015ad5fe63a1e910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:10:41.956183  388517 start.go:360] acquireMachinesLock for addons-269722: {Name:mkc77d1cf752e1546ce7850a29dbe975ae7fa9b7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:10:41.956225  388517 start.go:364] duration metric: took 30.995µs to acquireMachinesLock for "addons-269722"
	I1206 09:10:41.956247  388517 start.go:93] Provisioning new machine with config: &{Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:10:41.956289  388517 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 09:10:41.957646  388517 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1206 09:10:41.957797  388517 start.go:159] libmachine.API.Create for "addons-269722" (driver="kvm2")
	I1206 09:10:41.957831  388517 client.go:173] LocalClient.Create starting
	I1206 09:10:41.957926  388517 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem
	I1206 09:10:41.993468  388517 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem
	I1206 09:10:42.078767  388517 main.go:143] libmachine: creating domain...
	I1206 09:10:42.078784  388517 main.go:143] libmachine: creating network...
	I1206 09:10:42.080023  388517 main.go:143] libmachine: found existing default network
	I1206 09:10:42.080210  388517 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:10:42.080787  388517 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d56770}
	I1206 09:10:42.080910  388517 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-269722</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:10:42.086592  388517 main.go:143] libmachine: creating private network mk-addons-269722 192.168.39.0/24...
	I1206 09:10:42.152917  388517 main.go:143] libmachine: private network mk-addons-269722 192.168.39.0/24 created
	I1206 09:10:42.153176  388517 main.go:143] libmachine: <network>
	  <name>mk-addons-269722</name>
	  <uuid>2336c74c-93b2-42b0-890b-3a8a8a25a922</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:fd:c9:1f'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
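
The "creating private network" step above boils down to handing libvirt a network XML and starting it. As a point of reference only, here is a minimal Go sketch of that call sequence, assuming the libvirt.org/go/libvirt bindings; it is not the kvm2 driver's actual code, and the XML literal simply mirrors the definition dumped above.

// Define and start a persistent libvirt network from XML
// (roughly "virsh net-define" followed by "virsh net-start").
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-addons-269722</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Create the persistent network object, then bring it up.
	nw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer nw.Free()

	if err := nw.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	log.Println("network mk-addons-269722 is active")
}
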
	
	I1206 09:10:42.153203  388517 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 ...
	I1206 09:10:42.153230  388517 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22047-383742/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 09:10:42.153244  388517 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:42.153313  388517 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22047-383742/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22047-383742/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
	I1206 09:10:42.415061  388517 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa...
	I1206 09:10:42.429309  388517 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk...
	I1206 09:10:42.429369  388517 main.go:143] libmachine: Writing magic tar header
	I1206 09:10:42.429404  388517 main.go:143] libmachine: Writing SSH key tar header
	I1206 09:10:42.429498  388517 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 ...
	I1206 09:10:42.429571  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722
	I1206 09:10:42.429604  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722 (perms=drwx------)
	I1206 09:10:42.429623  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube/machines
	I1206 09:10:42.429636  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube/machines (perms=drwxr-xr-x)
	I1206 09:10:42.429647  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:42.429656  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742/.minikube (perms=drwxr-xr-x)
	I1206 09:10:42.429674  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-383742
	I1206 09:10:42.429704  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-383742 (perms=drwxrwxr-x)
	I1206 09:10:42.429722  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1206 09:10:42.429744  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 09:10:42.429758  388517 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1206 09:10:42.429765  388517 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 09:10:42.429775  388517 main.go:143] libmachine: checking permissions on dir: /home
	I1206 09:10:42.429781  388517 main.go:143] libmachine: skipping /home - not owner
	I1206 09:10:42.429788  388517 main.go:143] libmachine: defining domain...
	I1206 09:10:42.431063  388517 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-269722</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-269722'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1206 09:10:42.438342  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:8d:9c:cf in network default
	I1206 09:10:42.438932  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:42.438948  388517 main.go:143] libmachine: starting domain...
	I1206 09:10:42.438952  388517 main.go:143] libmachine: ensuring networks are active...
	I1206 09:10:42.439580  388517 main.go:143] libmachine: Ensuring network default is active
	I1206 09:10:42.439915  388517 main.go:143] libmachine: Ensuring network mk-addons-269722 is active
	I1206 09:10:42.440425  388517 main.go:143] libmachine: getting domain XML...
	I1206 09:10:42.441355  388517 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-269722</name>
	  <uuid>faaa974f-af9d-46f8-a3b5-02afcdf78e43</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/addons-269722.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f2:80:b2'/>
	      <source network='mk-addons-269722'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:8d:9c:cf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
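
The sequence above (define the domain XML, read back the expanded definition, then boot the guest) maps onto a handful of libvirt calls. The following is an illustrative sketch, again assuming the libvirt.org/go/libvirt bindings rather than the driver's real implementation; the file name addons-269722.xml is hypothetical and stands in for the domain XML shown above.

// Define a persistent domain from XML, start it, and dump the XML libvirt
// actually runs with (generated MACs, PCI addresses, expanded defaults).
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("addons-269722.xml") // hypothetical file holding the domain XML above
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // roughly "virsh define"
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // roughly "virsh start"
		log.Fatalf("start domain: %v", err)
	}

	live, err := dom.GetXMLDesc(0)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("running domain XML:\n%s", live)
}
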
	
	I1206 09:10:43.781082  388517 main.go:143] libmachine: waiting for domain to start...
	I1206 09:10:43.782318  388517 main.go:143] libmachine: domain is now running
	I1206 09:10:43.782338  388517 main.go:143] libmachine: waiting for IP...
	I1206 09:10:43.783021  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:43.783369  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:43.783385  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:43.783643  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:43.783696  388517 retry.go:31] will retry after 278.987444ms: waiting for domain to come up
	I1206 09:10:44.064124  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.064595  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.064606  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.064919  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.064957  388517 retry.go:31] will retry after 330.689041ms: waiting for domain to come up
	I1206 09:10:44.397460  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.397947  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.397962  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.398238  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.398277  388517 retry.go:31] will retry after 413.406233ms: waiting for domain to come up
	I1206 09:10:44.812999  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:44.813581  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:44.813601  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:44.813924  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:44.813970  388517 retry.go:31] will retry after 440.754763ms: waiting for domain to come up
	I1206 09:10:45.256730  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:45.257210  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:45.257228  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:45.257514  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:45.257556  388517 retry.go:31] will retry after 717.110818ms: waiting for domain to come up
	I1206 09:10:45.975902  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:45.976408  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:45.976424  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:45.976689  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:45.976722  388517 retry.go:31] will retry after 589.246662ms: waiting for domain to come up
	I1206 09:10:46.567419  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:46.567953  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:46.567973  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:46.568280  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:46.568326  388517 retry.go:31] will retry after 857.836192ms: waiting for domain to come up
	I1206 09:10:47.427627  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:47.428082  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:47.428097  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:47.428421  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:47.428475  388517 retry.go:31] will retry after 969.137484ms: waiting for domain to come up
	I1206 09:10:48.399647  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:48.400199  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:48.400215  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:48.400562  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:48.400615  388517 retry.go:31] will retry after 1.740343977s: waiting for domain to come up
	I1206 09:10:50.143512  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:50.143999  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:50.144014  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:50.144329  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:50.144363  388517 retry.go:31] will retry after 2.180103707s: waiting for domain to come up
	I1206 09:10:52.325956  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:52.326470  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:52.326485  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:52.326823  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:52.326870  388517 retry.go:31] will retry after 2.821995124s: waiting for domain to come up
	I1206 09:10:55.151850  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:55.152380  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:55.152397  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:55.152818  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:55.152881  388517 retry.go:31] will retry after 2.278330426s: waiting for domain to come up
	I1206 09:10:57.432300  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:10:57.432813  388517 main.go:143] libmachine: no network interface addresses found for domain addons-269722 (source=lease)
	I1206 09:10:57.432829  388517 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:10:57.433107  388517 main.go:143] libmachine: unable to find current IP address of domain addons-269722 in network mk-addons-269722 (interfaces detected: [])
	I1206 09:10:57.433144  388517 retry.go:31] will retry after 3.558016636s: waiting for domain to come up
	I1206 09:11:00.994805  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:00.995368  388517 main.go:143] libmachine: domain addons-269722 has current primary IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:00.995386  388517 main.go:143] libmachine: found domain IP: 192.168.39.220
	I1206 09:11:00.995394  388517 main.go:143] libmachine: reserving static IP address...
	I1206 09:11:00.995774  388517 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-269722", mac: "52:54:00:f2:80:b2", ip: "192.168.39.220"} in network mk-addons-269722
	I1206 09:11:01.169742  388517 main.go:143] libmachine: reserved static IP address 192.168.39.220 for domain addons-269722
	I1206 09:11:01.169781  388517 main.go:143] libmachine: waiting for SSH...
	I1206 09:11:01.169788  388517 main.go:143] libmachine: Getting to WaitForSSH function...
	I1206 09:11:01.172807  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.173481  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.173514  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.173694  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.173964  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.173979  388517 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1206 09:11:01.272210  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:11:01.272513  388517 main.go:143] libmachine: domain creation complete
	I1206 09:11:01.273828  388517 machine.go:94] provisionDockerMachine start ...
	I1206 09:11:01.275801  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.276155  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.276181  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.276321  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.276511  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.276520  388517 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:11:01.373100  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1206 09:11:01.373130  388517 buildroot.go:166] provisioning hostname "addons-269722"
	I1206 09:11:01.375944  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.376345  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.376372  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.376608  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.376841  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.376854  388517 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-269722 && echo "addons-269722" | sudo tee /etc/hostname
	I1206 09:11:01.490874  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-269722
	
	I1206 09:11:01.493600  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.493995  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.494015  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.494204  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:01.494457  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:01.494481  388517 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-269722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-269722/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-269722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:11:01.601899  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
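
The hostname and /etc/hosts commands above are plain shell snippets executed over SSH with the machine's generated key. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; the key path and address are taken from this run, host-key verification is skipped only to keep the example short, and this is not the ssh_runner implementation itself.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.39.220:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One session per remote command, mirroring the sequence in the log.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(`sudo hostname addons-269722 && echo "addons-269722" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	fmt.Printf("output: %s\n", out)
}
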
	I1206 09:11:01.601925  388517 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-383742/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-383742/.minikube}
	I1206 09:11:01.601941  388517 buildroot.go:174] setting up certificates
	I1206 09:11:01.601950  388517 provision.go:84] configureAuth start
	I1206 09:11:01.604648  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.605083  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.605108  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607340  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607665  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.607684  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.607799  388517 provision.go:143] copyHostCerts
	I1206 09:11:01.607857  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/ca.pem (1082 bytes)
	I1206 09:11:01.608028  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/cert.pem (1123 bytes)
	I1206 09:11:01.608130  388517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-383742/.minikube/key.pem (1675 bytes)
	I1206 09:11:01.608197  388517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem org=jenkins.addons-269722 san=[127.0.0.1 192.168.39.220 addons-269722 localhost minikube]
	I1206 09:11:01.761887  388517 provision.go:177] copyRemoteCerts
	I1206 09:11:01.761947  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:11:01.764212  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.764543  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.764581  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.764716  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:01.844794  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:11:01.873452  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:11:01.901904  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
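
configureAuth above generates a server certificate whose SANs cover every name the API server may be reached by (san=[127.0.0.1 192.168.39.220 addons-269722 localhost minikube]) and then copies it into the guest. As an illustration only, here is a compact crypto/x509 sketch of issuing such a SAN-bearing certificate from a CA; the key sizes, validity period, and freshly generated CA are assumptions for the example, not what minikube does byte-for-byte.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway CA for the sketch; in the log the CA was created earlier
	// and would be loaded from ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the IP and DNS SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-269722"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-269722", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
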
	I1206 09:11:01.930285  388517 provision.go:87] duration metric: took 328.321351ms to configureAuth
	I1206 09:11:01.930311  388517 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:11:01.930501  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:01.930521  388517 machine.go:97] duration metric: took 656.676665ms to provisionDockerMachine
	I1206 09:11:01.930531  388517 client.go:176] duration metric: took 19.972691553s to LocalClient.Create
	I1206 09:11:01.930551  388517 start.go:167] duration metric: took 19.97275355s to libmachine.API.Create "addons-269722"
	I1206 09:11:01.930596  388517 start.go:293] postStartSetup for "addons-269722" (driver="kvm2")
	I1206 09:11:01.930611  388517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:11:01.930658  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:11:01.933229  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.933604  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:01.933625  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:01.933768  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.013069  388517 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:11:02.017563  388517 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:11:02.017583  388517 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-383742/.minikube/addons for local assets ...
	I1206 09:11:02.017651  388517 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-383742/.minikube/files for local assets ...
	I1206 09:11:02.017684  388517 start.go:296] duration metric: took 87.076069ms for postStartSetup
	I1206 09:11:02.020584  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.020944  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.020967  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.021198  388517 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/config.json ...
	I1206 09:11:02.021364  388517 start.go:128] duration metric: took 20.065065791s to createHost
	I1206 09:11:02.023485  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.023794  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.023813  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.023959  388517 main.go:143] libmachine: Using SSH client type: native
	I1206 09:11:02.024173  388517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1206 09:11:02.024185  388517 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:11:02.121919  388517 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765012262.085933657
	
	I1206 09:11:02.121936  388517 fix.go:216] guest clock: 1765012262.085933657
	I1206 09:11:02.121942  388517 fix.go:229] Guest: 2025-12-06 09:11:02.085933657 +0000 UTC Remote: 2025-12-06 09:11:02.021381724 +0000 UTC m=+20.161953678 (delta=64.551933ms)
	I1206 09:11:02.121960  388517 fix.go:200] guest clock delta is within tolerance: 64.551933ms
	I1206 09:11:02.121974  388517 start.go:83] releasing machines lock for "addons-269722", held for 20.165731842s
	I1206 09:11:02.124594  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.124944  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.124973  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.125474  388517 ssh_runner.go:195] Run: cat /version.json
	I1206 09:11:02.125592  388517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:11:02.128433  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.128746  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.128763  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.128921  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.128989  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.129445  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:02.129480  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:02.129624  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:02.204247  388517 ssh_runner.go:195] Run: systemctl --version
	I1206 09:11:02.228305  388517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:11:02.234563  388517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:11:02.234633  388517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:11:02.260428  388517 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:11:02.260454  388517 start.go:496] detecting cgroup driver to use...
	I1206 09:11:02.260528  388517 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1206 09:11:02.297166  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:11:02.315488  388517 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:11:02.315555  388517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:11:02.332111  388517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:11:02.347076  388517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:11:02.491701  388517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:11:02.703514  388517 docker.go:234] disabling docker service ...
	I1206 09:11:02.703604  388517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:11:02.719452  388517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:11:02.733466  388517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:11:02.882667  388517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:11:03.020738  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:11:03.036166  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:11:03.057682  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:11:03.069874  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:11:03.081945  388517 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1206 09:11:03.082022  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1206 09:11:03.094105  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:11:03.106250  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:11:03.117968  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:11:03.130001  388517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:11:03.142658  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:11:03.154729  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:11:03.166983  388517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
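[editor note] The sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs cgroup driver, the pause:3.10.1 sandbox image, the runc v2 shim and /etc/cni/net.d as conf_dir, with crictl pointed at the containerd socket via /etc/crictl.yaml. A quick way to spot-check the result on the guest, once containerd has been restarted a few lines below (an illustrative sketch, not part of the test output):

    # confirm the values the sed commands above are supposed to set
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # confirm crictl is pointed at the containerd socket
    cat /etc/crictl.yaml
    sudo crictl info >/dev/null && echo "CRI endpoint reachable"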
	I1206 09:11:03.178658  388517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:11:03.188759  388517 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 09:11:03.188803  388517 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 09:11:03.211314  388517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
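[editor note] The netfilter probe fails because br_netfilter is not loaded yet, so the module is loaded explicitly and IPv4 forwarding is switched on. A hedged way to verify that state afterwards:

    lsmod | grep br_netfilter                    # present after the modprobe above
    sysctl net.bridge.bridge-nf-call-iptables    # resolvable (and normally 1) once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward            # 1, written by the echo above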
	I1206 09:11:03.224103  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:03.361032  388517 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:11:03.404281  388517 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1206 09:11:03.404385  388517 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:11:03.409523  388517 retry.go:31] will retry after 1.49666292s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1206 09:11:04.906469  388517 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1206 09:11:04.912677  388517 start.go:564] Will wait 60s for crictl version
	I1206 09:11:04.912759  388517 ssh_runner.go:195] Run: which crictl
	I1206 09:11:04.916909  388517 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:11:04.952021  388517 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1206 09:11:04.952114  388517 ssh_runner.go:195] Run: containerd --version
	I1206 09:11:04.979176  388517 ssh_runner.go:195] Run: containerd --version
	I1206 09:11:05.046042  388517 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 1.7.23 ...
	I1206 09:11:05.113332  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:05.113713  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:05.113733  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:05.113904  388517 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:11:05.118728  388517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:05.134279  388517 kubeadm.go:884] updating cluster {Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:11:05.134389  388517 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1206 09:11:05.134436  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:05.163245  388517 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1206 09:11:05.163338  388517 ssh_runner.go:195] Run: which lz4
	I1206 09:11:05.167791  388517 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 09:11:05.172645  388517 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 09:11:05.172675  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (339763354 bytes)
	I1206 09:11:06.408453  388517 containerd.go:563] duration metric: took 1.240701247s to copy over tarball
	I1206 09:11:06.408534  388517 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 09:11:07.824785  388517 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.41620911s)
	I1206 09:11:07.824829  388517 containerd.go:570] duration metric: took 1.416348198s to extract the tarball
	I1206 09:11:07.824837  388517 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 09:11:07.876750  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:08.019449  388517 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:11:08.055912  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:08.089979  388517 retry.go:31] will retry after 204.800226ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T09:11:08Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1206 09:11:08.295519  388517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:11:08.332986  388517 containerd.go:627] all images are preloaded for containerd runtime.
	I1206 09:11:08.333019  388517 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:11:08.333035  388517 kubeadm.go:935] updating node { 192.168.39.220 8443 v1.34.2 containerd true true} ...
	I1206 09:11:08.333199  388517 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-269722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:11:08.333263  388517 ssh_runner.go:195] Run: sudo crictl info
	I1206 09:11:08.363626  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:11:08.363652  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:11:08.363671  388517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:11:08.363694  388517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-269722 NodeName:addons-269722 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:11:08.363802  388517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-269722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:11:08.363898  388517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:11:08.376320  388517 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:11:08.376400  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:11:08.387974  388517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1206 09:11:08.408073  388517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:11:08.428105  388517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
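[editor note] The kubeadm configuration dumped above is written to /var/tmp/minikube/kubeadm.yaml.new here (2232 bytes). If it ever needs checking by hand, kubeadm itself can validate the v1beta4 document; this is purely illustrative, the test goes straight to `kubeadm init` further below:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or exercise the whole init flow without changing the node:
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run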
	I1206 09:11:08.448237  388517 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I1206 09:11:08.452207  388517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:11:08.466654  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:08.612134  388517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:08.650190  388517 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722 for IP: 192.168.39.220
	I1206 09:11:08.650221  388517 certs.go:195] generating shared ca certs ...
	I1206 09:11:08.650248  388517 certs.go:227] acquiring lock for ca certs: {Name:mkf308ce4033be42aa40d533f6774edcee747959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.650426  388517 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key
	I1206 09:11:08.753472  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt ...
	I1206 09:11:08.753502  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt: {Name:mk0bc547e2c4a3698a714e2e67e37fe0843ac532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.753663  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key ...
	I1206 09:11:08.753675  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key: {Name:mk257636778cdf81faeb62cfd641c994d65ea561 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.753763  388517 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key
	I1206 09:11:08.944161  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt ...
	I1206 09:11:08.944193  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt: {Name:mk7a27f62c25f1293f691b851f1b366a8491b851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.944357  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key ...
	I1206 09:11:08.944369  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key: {Name:mk0dbe369ea38e824cffd9d96349344507b04d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:08.944442  388517 certs.go:257] generating profile certs ...
	I1206 09:11:08.944507  388517 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key
	I1206 09:11:08.944522  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt with IP's: []
	I1206 09:11:09.004417  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt ...
	I1206 09:11:09.004443  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: {Name:mkc7ee580529997a0158c489e5de6aaaab4381ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.004577  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key ...
	I1206 09:11:09.004587  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.key: {Name:mk6aea14e5a790daaff4a5aa584541cbd36fa7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.004653  388517 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9
	I1206 09:11:09.004671  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220]
	I1206 09:11:09.103453  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 ...
	I1206 09:11:09.103485  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9: {Name:mkb69edd53ea15cc714b2e6dcd35fb9bda8e0a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.103642  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9 ...
	I1206 09:11:09.103658  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9: {Name:mkbef642e3d05cf341f2d82d3597bab753cd2174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.103728  388517 certs.go:382] copying /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt.64c18cd9 -> /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt
	I1206 09:11:09.103816  388517 certs.go:386] copying /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key.64c18cd9 -> /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key
	I1206 09:11:09.103876  388517 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key
	I1206 09:11:09.103896  388517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt with IP's: []
	I1206 09:11:09.195473  388517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt ...
	I1206 09:11:09.195504  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt: {Name:mk1ed5a652995aaac584bd788ffca22c7d7d4179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.195645  388517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key ...
	I1206 09:11:09.195657  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key: {Name:mkb0905602ecfb2d53502a566a95204a8f98bd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:09.195846  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 09:11:09.195899  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:11:09.195942  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:11:09.195967  388517 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-383742/.minikube/certs/key.pem (1675 bytes)
	I1206 09:11:09.196610  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:11:09.227924  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:11:09.257244  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:11:09.287169  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:11:09.319682  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:11:09.354785  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:11:09.391203  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:11:09.419761  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:11:09.448250  388517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:11:09.476343  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
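[editor note] The profile certificates generated above are now in /var/lib/minikube/certs; the apiserver certificate was signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.220, which can be confirmed with openssl (illustrative):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
    # the IP SANs printed should match the list logged by crypto.go above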
	I1206 09:11:09.495953  388517 ssh_runner.go:195] Run: openssl version
	I1206 09:11:09.502134  388517 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.512996  388517 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:11:09.524111  388517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.529273  388517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:11 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.529325  388517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:11:09.536780  388517 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:11:09.547642  388517 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
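[editor note] The /etc/ssl/certs/b5213941.0 symlink name is OpenSSL's subject hash of the minikube CA, which is how the system trust store looks the certificate up. Reproducing it by hand (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink created by the ln -fs above
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem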
	I1206 09:11:09.558961  388517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:11:09.563664  388517 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:11:09.563723  388517 kubeadm.go:401] StartCluster: {Name:addons-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:09.563812  388517 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1206 09:11:09.563854  388517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:11:09.597231  388517 cri.go:89] found id: ""
	I1206 09:11:09.597295  388517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:11:09.609197  388517 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:11:09.619916  388517 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:11:09.631012  388517 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:11:09.631028  388517 kubeadm.go:158] found existing configuration files:
	
	I1206 09:11:09.631067  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:11:09.641398  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:11:09.641442  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:11:09.652328  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:11:09.662630  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:11:09.662683  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:11:09.673582  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:11:09.683944  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:11:09.683997  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:11:09.694924  388517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:11:09.705284  388517 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:11:09.705332  388517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:11:09.716270  388517 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 09:11:09.765023  388517 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:11:09.765245  388517 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:11:09.858054  388517 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:11:09.858229  388517 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:11:09.858396  388517 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:11:09.865139  388517 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:11:09.920280  388517 out.go:252]   - Generating certificates and keys ...
	I1206 09:11:09.920378  388517 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:11:09.920462  388517 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:11:10.105985  388517 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:11:10.865814  388517 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:11:10.897033  388517 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:11:11.249180  388517 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:11:11.405265  388517 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:11:11.405459  388517 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-269722 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1206 09:11:11.595783  388517 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:11:11.595930  388517 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-269722 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1206 09:11:11.685113  388517 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:11:11.795320  388517 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:11:12.056322  388517 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:11:12.057602  388517 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:11:12.245522  388517 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:11:12.344100  388517 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:11:12.481696  388517 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:11:12.805057  388517 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:11:12.987909  388517 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:11:12.988354  388517 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:11:12.990637  388517 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:11:12.992591  388517 out.go:252]   - Booting up control plane ...
	I1206 09:11:12.992683  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:11:12.992757  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:11:12.992829  388517 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:11:13.009376  388517 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:11:13.009528  388517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:11:13.016083  388517 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:11:13.016157  388517 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:11:13.016213  388517 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:11:13.195314  388517 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:11:13.195457  388517 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:11:13.696155  388517 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.400144ms
	I1206 09:11:13.701317  388517 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:11:13.701412  388517 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.220:8443/livez
	I1206 09:11:13.701516  388517 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:11:13.701609  388517 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:11:15.925448  388517 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.2258309s
	I1206 09:11:17.097937  388517 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.399298925s
	I1206 09:11:19.199961  388517 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502821586s
	I1206 09:11:19.217728  388517 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:11:19.231172  388517 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:11:19.244842  388517 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:11:19.245047  388517 kubeadm.go:319] [mark-control-plane] Marking the node addons-269722 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:11:19.255597  388517 kubeadm.go:319] [bootstrap-token] Using token: tnc6di.0o5js773tkjcekar
	I1206 09:11:19.256827  388517 out.go:252]   - Configuring RBAC rules ...
	I1206 09:11:19.256963  388517 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:11:19.261388  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:11:19.269766  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:11:19.273599  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:11:19.281952  388517 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:11:19.288853  388517 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:11:19.605592  388517 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:11:20.070227  388517 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:11:20.605934  388517 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:11:20.606844  388517 kubeadm.go:319] 
	I1206 09:11:20.606929  388517 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:11:20.606938  388517 kubeadm.go:319] 
	I1206 09:11:20.607026  388517 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:11:20.607033  388517 kubeadm.go:319] 
	I1206 09:11:20.607064  388517 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:11:20.607146  388517 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:11:20.607224  388517 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:11:20.607234  388517 kubeadm.go:319] 
	I1206 09:11:20.607327  388517 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:11:20.607350  388517 kubeadm.go:319] 
	I1206 09:11:20.607426  388517 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:11:20.607434  388517 kubeadm.go:319] 
	I1206 09:11:20.607510  388517 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:11:20.607639  388517 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:11:20.607758  388517 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:11:20.607774  388517 kubeadm.go:319] 
	I1206 09:11:20.607894  388517 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:11:20.607992  388517 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:11:20.608007  388517 kubeadm.go:319] 
	I1206 09:11:20.608129  388517 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tnc6di.0o5js773tkjcekar \
	I1206 09:11:20.608283  388517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04fdba1f0cc9e5b6ff9fb0c67883e9efc1b2d27a26263d71016b7c2692858db2 \
	I1206 09:11:20.608307  388517 kubeadm.go:319] 	--control-plane 
	I1206 09:11:20.608316  388517 kubeadm.go:319] 
	I1206 09:11:20.608391  388517 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:11:20.608397  388517 kubeadm.go:319] 
	I1206 09:11:20.608494  388517 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tnc6di.0o5js773tkjcekar \
	I1206 09:11:20.608638  388517 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04fdba1f0cc9e5b6ff9fb0c67883e9efc1b2d27a26263d71016b7c2692858db2 
	I1206 09:11:20.609835  388517 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
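[editor note] The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. With certificatesDir set to /var/lib/minikube/certs as in the kubeadm config, it can be recomputed with the standard kubeadm recipe (shown only for illustration):

    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expected to print the 04fdba1f… hash shown in the join command above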
	I1206 09:11:20.609893  388517 cni.go:84] Creating CNI manager for ""
	I1206 09:11:20.609910  388517 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:11:20.611407  388517 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:11:20.612520  388517 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:11:20.630100  388517 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1206 09:11:20.652382  388517 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:11:20.652515  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:20.652537  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-269722 minikube.k8s.io/updated_at=2025_12_06T09_11_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-269722 minikube.k8s.io/primary=true
	I1206 09:11:20.694430  388517 ops.go:34] apiserver oom_adj: -16
	I1206 09:11:20.784013  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:21.284280  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:21.784935  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:22.284329  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:22.784096  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:23.284134  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:23.784412  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:24.285006  388517 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:11:24.365500  388517 kubeadm.go:1114] duration metric: took 3.713041621s to wait for elevateKubeSystemPrivileges
	I1206 09:11:24.365554  388517 kubeadm.go:403] duration metric: took 14.801837471s to StartCluster
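[editor note] elevateKubeSystemPrivileges above is simply a poll: the same `kubectl get sa default` is retried until the default ServiceAccount exists and the cluster-admin binding can be applied. An equivalent wait loop (a sketch using the same paths) would be:

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done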
	I1206 09:11:24.365583  388517 settings.go:142] acquiring lock: {Name:mk5046213dcb1abe0d7fe7b15722aa4884a98be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:24.365735  388517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:11:24.366166  388517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-383742/kubeconfig: {Name:mka1b03c13e1e115a4ba1af8cb483b83d246825c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:11:24.366385  388517 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1206 09:11:24.366393  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:11:24.366467  388517 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 09:11:24.366579  388517 addons.go:70] Setting yakd=true in profile "addons-269722"
	I1206 09:11:24.366593  388517 addons.go:70] Setting inspektor-gadget=true in profile "addons-269722"
	I1206 09:11:24.366594  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:24.366606  388517 addons.go:239] Setting addon yakd=true in "addons-269722"
	I1206 09:11:24.366612  388517 addons.go:239] Setting addon inspektor-gadget=true in "addons-269722"
	I1206 09:11:24.366637  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366644  388517 addons.go:70] Setting default-storageclass=true in profile "addons-269722"
	I1206 09:11:24.366651  388517 addons.go:70] Setting gcp-auth=true in profile "addons-269722"
	I1206 09:11:24.366663  388517 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-269722"
	I1206 09:11:24.366682  388517 mustload.go:66] Loading cluster: addons-269722
	I1206 09:11:24.366726  388517 addons.go:70] Setting registry-creds=true in profile "addons-269722"
	I1206 09:11:24.366753  388517 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-269722"
	I1206 09:11:24.366753  388517 addons.go:70] Setting cloud-spanner=true in profile "addons-269722"
	I1206 09:11:24.366778  388517 addons.go:239] Setting addon registry-creds=true in "addons-269722"
	I1206 09:11:24.366781  388517 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-269722"
	I1206 09:11:24.366784  388517 addons.go:239] Setting addon cloud-spanner=true in "addons-269722"
	I1206 09:11:24.366787  388517 addons.go:70] Setting storage-provisioner=true in profile "addons-269722"
	I1206 09:11:24.366800  388517 addons.go:239] Setting addon storage-provisioner=true in "addons-269722"
	I1206 09:11:24.366811  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366818  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366819  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366821  388517 addons.go:70] Setting metrics-server=true in profile "addons-269722"
	I1206 09:11:24.366836  388517 addons.go:239] Setting addon metrics-server=true in "addons-269722"
	I1206 09:11:24.366850  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.366901  388517 config.go:182] Loaded profile config "addons-269722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:11:24.366979  388517 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-269722"
	I1206 09:11:24.367005  388517 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-269722"
	I1206 09:11:24.367028  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367504  388517 addons.go:70] Setting registry=true in profile "addons-269722"
	I1206 09:11:24.367531  388517 addons.go:239] Setting addon registry=true in "addons-269722"
	I1206 09:11:24.367561  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367879  388517 addons.go:70] Setting ingress=true in profile "addons-269722"
	I1206 09:11:24.367904  388517 addons.go:239] Setting addon ingress=true in "addons-269722"
	I1206 09:11:24.366811  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367940  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.367975  388517 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-269722"
	I1206 09:11:24.367998  388517 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-269722"
	I1206 09:11:24.368012  388517 addons.go:70] Setting volcano=true in profile "addons-269722"
	I1206 09:11:24.368028  388517 addons.go:239] Setting addon volcano=true in "addons-269722"
	I1206 09:11:24.368051  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368065  388517 addons.go:70] Setting volumesnapshots=true in profile "addons-269722"
	I1206 09:11:24.368083  388517 addons.go:239] Setting addon volumesnapshots=true in "addons-269722"
	I1206 09:11:24.368108  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368318  388517 addons.go:70] Setting ingress-dns=true in profile "addons-269722"
	I1206 09:11:24.368334  388517 addons.go:239] Setting addon ingress-dns=true in "addons-269722"
	I1206 09:11:24.368504  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368582  388517 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-269722"
	I1206 09:11:24.368650  388517 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-269722"
	I1206 09:11:24.368672  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.368873  388517 out.go:179] * Verifying Kubernetes components...
	I1206 09:11:24.366646  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.370225  388517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:11:24.371769  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.373754  388517 addons.go:239] Setting addon default-storageclass=true in "addons-269722"
	I1206 09:11:24.373789  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.374301  388517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:11:24.374379  388517 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 09:11:24.375268  388517 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 09:11:24.375275  388517 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 09:11:24.375328  388517 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:24.375343  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:11:24.376013  388517 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 09:11:24.376046  388517 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 09:11:24.376074  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 09:11:24.376035  388517 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 09:11:24.376134  388517 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-269722"
	I1206 09:11:24.376581  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:24.376790  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 09:11:24.376809  388517 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 09:11:24.376827  388517 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:11:24.376841  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 09:11:24.376847  388517 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 09:11:24.377596  388517 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:11:24.377612  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 09:11:24.378229  388517 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 09:11:24.378237  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 09:11:24.378252  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 09:11:24.378268  388517 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 09:11:24.378231  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:24.378298  388517 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1206 09:11:24.378904  388517 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 09:11:24.378904  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 09:11:24.378253  388517 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 09:11:24.379492  388517 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:24.379507  388517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:11:24.379650  388517 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 09:11:24.379665  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 09:11:24.379672  388517 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:11:24.379683  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 09:11:24.380334  388517 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:11:24.380373  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 09:11:24.380344  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 09:11:24.380559  388517 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 09:11:24.380561  388517 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 09:11:24.381552  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 09:11:24.381577  388517 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1206 09:11:24.382302  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:24.382322  388517 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:11:24.382342  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 09:11:24.384092  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 09:11:24.384119  388517 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1206 09:11:24.384134  388517 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 09:11:24.384092  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 09:11:24.385853  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.386682  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 09:11:24.386986  388517 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:11:24.387009  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 09:11:24.387404  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.387763  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.387799  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388004  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388093  388517 out.go:179]   - Using image docker.io/busybox:stable
	I1206 09:11:24.388126  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.388701  388517 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:11:24.388724  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1206 09:11:24.389099  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.389150  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.389220  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 09:11:24.389288  388517 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:11:24.389303  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 09:11:24.389924  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.389981  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390249  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390264  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390288  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390293  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.390722  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.390908  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.390941  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.391134  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.391134  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.391542  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 09:11:24.391835  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.392214  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.392478  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.393141  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394085  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394128  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394319  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394473  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394510  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.394522  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394539  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394585  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 09:11:24.394628  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.394751  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.395613  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396225  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396316  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.396359  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.396795  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.396833  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397321  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397322  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.397417  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397434  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.397472  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397481  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397505  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.397761  388517 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 09:11:24.397813  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.397879  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.398225  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.398815  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 09:11:24.398876  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 09:11:24.398990  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399146  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399416  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399466  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.399501  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399518  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.399553  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.399720  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.399930  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.400166  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.400198  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.400399  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:24.401986  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.402373  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:24.402406  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:24.402558  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	W1206 09:11:24.544745  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34226->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.544776  388517 retry.go:31] will retry after 167.524935ms: ssh: handshake failed: read tcp 192.168.39.1:34226->192.168.39.220:22: read: connection reset by peer
	W1206 09:11:24.544834  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34242->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.544842  388517 retry.go:31] will retry after 337.340492ms: ssh: handshake failed: read tcp 192.168.39.1:34242->192.168.39.220:22: read: connection reset by peer
	W1206 09:11:24.586807  388517 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34260->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.586836  388517 retry.go:31] will retry after 361.026308ms: ssh: handshake failed: read tcp 192.168.39.1:34260->192.168.39.220:22: read: connection reset by peer
	I1206 09:11:24.720251  388517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:11:24.720260  388517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:11:24.915042  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:11:24.943642  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:11:24.946926  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:11:25.098136  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:11:25.119770  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 09:11:25.119795  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 09:11:25.208175  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:11:25.224407  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 09:11:25.224432  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 09:11:25.225309  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:11:25.232666  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 09:11:25.232682  388517 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 09:11:25.246755  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 09:11:25.246777  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 09:11:25.247663  388517 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 09:11:25.247683  388517 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 09:11:25.270838  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:11:25.331361  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:11:25.449965  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 09:11:25.469046  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:11:25.613424  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 09:11:25.613456  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 09:11:25.633923  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 09:11:25.633954  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 09:11:25.657079  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 09:11:25.657110  388517 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 09:11:25.695667  388517 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:11:25.695693  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 09:11:25.696553  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:11:25.756474  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 09:11:25.756502  388517 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 09:11:26.160704  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 09:11:26.160736  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 09:11:26.284633  388517 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 09:11:26.284662  388517 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 09:11:26.286985  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:11:26.434395  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 09:11:26.434422  388517 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 09:11:26.465197  388517 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:11:26.465225  388517 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 09:11:26.661217  388517 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:11:26.661249  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 09:11:26.705778  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:11:26.774501  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 09:11:26.774527  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 09:11:26.849719  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:11:26.906080  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 09:11:26.906136  388517 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 09:11:27.000268  388517 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 09:11:27.000294  388517 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 09:11:27.610778  388517 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:27.610815  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 09:11:27.800583  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 09:11:27.800607  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 09:11:27.882544  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:28.272413  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 09:11:28.272451  388517 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 09:11:28.298383  388517 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.578087161s)
	I1206 09:11:28.298435  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.38335524s)
	I1206 09:11:28.298380  388517 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.578018491s)
	I1206 09:11:28.298514  388517 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1206 09:11:28.298551  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.354877639s)
	I1206 09:11:28.298640  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.351685893s)
	I1206 09:11:28.299174  388517 node_ready.go:35] waiting up to 6m0s for node "addons-269722" to be "Ready" ...
	I1206 09:11:28.373103  388517 node_ready.go:49] node "addons-269722" is "Ready"
	I1206 09:11:28.373131  388517 node_ready.go:38] duration metric: took 73.939285ms for node "addons-269722" to be "Ready" ...
	I1206 09:11:28.373146  388517 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:11:28.373191  388517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:11:28.564603  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 09:11:28.564627  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 09:11:28.805525  388517 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-269722" context rescaled to 1 replicas
	I1206 09:11:28.892887  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 09:11:28.892912  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 09:11:29.154236  388517 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:11:29.154271  388517 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 09:11:29.383179  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:11:31.838578  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.740399341s)
	I1206 09:11:31.842964  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 09:11:31.846059  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:31.846625  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:31.846661  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:31.846877  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:32.206384  388517 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 09:11:32.398884  388517 addons.go:239] Setting addon gcp-auth=true in "addons-269722"
	I1206 09:11:32.398959  388517 host.go:66] Checking if "addons-269722" exists ...
	I1206 09:11:32.401192  388517 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 09:11:32.404036  388517 main.go:143] libmachine: domain addons-269722 has defined MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:32.404508  388517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:80:b2", ip: ""} in network mk-addons-269722: {Iface:virbr1 ExpiryTime:2025-12-06 10:10:57 +0000 UTC Type:0 Mac:52:54:00:f2:80:b2 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-269722 Clientid:01:52:54:00:f2:80:b2}
	I1206 09:11:32.404543  388517 main.go:143] libmachine: domain addons-269722 has defined IP address 192.168.39.220 and MAC address 52:54:00:f2:80:b2 in network mk-addons-269722
	I1206 09:11:32.404739  388517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/addons-269722/id_rsa Username:docker}
	I1206 09:11:33.380508  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.172285689s)
	I1206 09:11:33.380567  388517 addons.go:495] Verifying addon ingress=true in "addons-269722"
	I1206 09:11:33.380566  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.155226513s)
	I1206 09:11:33.380618  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.109753242s)
	I1206 09:11:33.382778  388517 out.go:179] * Verifying ingress addon...
	I1206 09:11:33.384997  388517 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 09:11:33.394151  388517 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:11:33.394167  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:33.983745  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:34.442405  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:34.961428  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:35.544843  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:35.959086  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:36.477596  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:36.933661  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:37.492983  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:37.907682  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:38.464342  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:38.476878  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.145459322s)
	I1206 09:11:38.476953  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.026949113s)
	I1206 09:11:38.477048  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.007974684s)
	I1206 09:11:38.477116  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.780538742s)
	I1206 09:11:38.477233  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.190220804s)
	I1206 09:11:38.477253  388517 addons.go:495] Verifying addon registry=true in "addons-269722"
	I1206 09:11:38.477312  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.77149962s)
	I1206 09:11:38.477336  388517 addons.go:495] Verifying addon metrics-server=true in "addons-269722"
	I1206 09:11:38.477363  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.627610125s)
	I1206 09:11:38.477525  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.594927288s)
	I1206 09:11:38.477544  388517 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.104332654s)
	I1206 09:11:38.477571  388517 api_server.go:72] duration metric: took 14.11116064s to wait for apiserver process to appear ...
	I1206 09:11:38.477583  388517 api_server.go:88] waiting for apiserver healthz status ...
	W1206 09:11:38.477581  388517 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:11:38.477604  388517 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1206 09:11:38.477604  388517 retry.go:31] will retry after 298.178363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:11:38.477795  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.094573264s)
	I1206 09:11:38.477823  388517 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-269722"
	I1206 09:11:38.477842  388517 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.076624226s)
	I1206 09:11:38.478884  388517 out.go:179] * Verifying registry addon...
	I1206 09:11:38.478890  388517 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-269722 service yakd-dashboard -n yakd-dashboard
	
	I1206 09:11:38.479684  388517 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:11:38.479686  388517 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 09:11:38.481128  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 09:11:38.482570  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 09:11:38.482875  388517 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 09:11:38.483935  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 09:11:38.483956  388517 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 09:11:38.542927  388517 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1206 09:11:38.560082  388517 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:11:38.560109  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:38.560250  388517 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:11:38.560266  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:38.564812  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 09:11:38.564836  388517 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 09:11:38.577730  388517 api_server.go:141] control plane version: v1.34.2
	I1206 09:11:38.577765  388517 api_server.go:131] duration metric: took 100.173477ms to wait for apiserver health ...
	I1206 09:11:38.577777  388517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:11:38.641466  388517 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:11:38.641493  388517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 09:11:38.668346  388517 system_pods.go:59] 20 kube-system pods found
	I1206 09:11:38.668390  388517 system_pods.go:61] "amd-gpu-device-plugin-4x5bp" [200b561d-9b38-41b5-b7ed-1d1b8aa9c977] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:11:38.668407  388517 system_pods.go:61] "coredns-66bc5c9577-l7sr8" [863c5ad0-c918-455d-8af1-40c9e1948ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:38.668417  388517 system_pods.go:61] "coredns-66bc5c9577-tn6dd" [1471497e-5fa4-48d4-a3c2-4d89904ed640] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1206 09:11:38.668435  388517 system_pods.go:61] "csi-hostpath-attacher-0" [bd1f1e77-8cad-40a2-97e3-2b05daf622f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:11:38.668450  388517 system_pods.go:61] "csi-hostpath-resizer-0" [4ed9076c-603a-48cd-a0d1-189d5fd51651] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:11:38.668460  388517 system_pods.go:61] "csi-hostpathplugin-c5bss" [d0b3695c-3b42-4065-9bdf-1b2206023c5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:11:38.668469  388517 system_pods.go:61] "etcd-addons-269722" [751c8eff-2c50-4b41-9193-90db8a0636bf] Running
	I1206 09:11:38.668476  388517 system_pods.go:61] "kube-apiserver-addons-269722" [d32278cf-92c2-455c-b174-fb8a83dadda4] Running
	I1206 09:11:38.668484  388517 system_pods.go:61] "kube-controller-manager-addons-269722" [7e253ad0-19bb-4870-926b-a1569f6f1398] Running
	I1206 09:11:38.668493  388517 system_pods.go:61] "kube-ingress-dns-minikube" [be7d521f-b31b-4231-bd74-8a66d93c3fc4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:11:38.668501  388517 system_pods.go:61] "kube-proxy-c2km9" [fb4b1fd3-c1e4-4d05-b0c9-5b52f82e1849] Running
	I1206 09:11:38.668508  388517 system_pods.go:61] "kube-scheduler-addons-269722" [73132ab3-f6c2-40cb-b3ba-aee3ff21019d] Running
	I1206 09:11:38.668520  388517 system_pods.go:61] "metrics-server-85b7d694d7-h2jq2" [c453240d-89be-44da-9070-e49d7ebbc593] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:11:38.668526  388517 system_pods.go:61] "nvidia-device-plugin-daemonset-knqvl" [916799e0-a31e-4b9a-9acc-b02b72d66299] Running
	I1206 09:11:38.668535  388517 system_pods.go:61] "registry-6b586f9694-rbbt6" [ec4e4a7f-6fd3-435d-bd23-ab587ffa45ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:11:38.668543  388517 system_pods.go:61] "registry-creds-764b6fb674-hkrh8" [b7741462-59ef-4947-ac5d-b5ffab88a570] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:11:38.668558  388517 system_pods.go:61] "registry-proxy-hbw67" [d47f2901-94d3-4e16-a0a8-5155e3f36879] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:11:38.668574  388517 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qbp6w" [0ead8e94-20c0-4dec-801d-66bd3dc39a02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.668644  388517 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v9sd5" [84d9cd78-04cb-4f8d-b8e7-a694b55e490a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.668650  388517 system_pods.go:61] "storage-provisioner" [07857490-6084-4734-a54d-f7de8ca29ea5] Running
	I1206 09:11:38.668660  388517 system_pods.go:74] duration metric: took 90.874732ms to wait for pod list to return data ...
	I1206 09:11:38.668672  388517 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:11:38.705679  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:11:38.776568  388517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:11:38.781850  388517 default_sa.go:45] found service account: "default"
	I1206 09:11:38.781885  388517 default_sa.go:55] duration metric: took 113.206818ms for default service account to be created ...
	I1206 09:11:38.781896  388517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:11:38.893236  388517 system_pods.go:86] 20 kube-system pods found
	I1206 09:11:38.893269  388517 system_pods.go:89] "amd-gpu-device-plugin-4x5bp" [200b561d-9b38-41b5-b7ed-1d1b8aa9c977] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:11:38.893310  388517 system_pods.go:89] "coredns-66bc5c9577-l7sr8" [863c5ad0-c918-455d-8af1-40c9e1948ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:11:38.893318  388517 system_pods.go:89] "coredns-66bc5c9577-tn6dd" [1471497e-5fa4-48d4-a3c2-4d89904ed640] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1206 09:11:38.893328  388517 system_pods.go:89] "csi-hostpath-attacher-0" [bd1f1e77-8cad-40a2-97e3-2b05daf622f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:11:38.893334  388517 system_pods.go:89] "csi-hostpath-resizer-0" [4ed9076c-603a-48cd-a0d1-189d5fd51651] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:11:38.893340  388517 system_pods.go:89] "csi-hostpathplugin-c5bss" [d0b3695c-3b42-4065-9bdf-1b2206023c5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:11:38.893344  388517 system_pods.go:89] "etcd-addons-269722" [751c8eff-2c50-4b41-9193-90db8a0636bf] Running
	I1206 09:11:38.893348  388517 system_pods.go:89] "kube-apiserver-addons-269722" [d32278cf-92c2-455c-b174-fb8a83dadda4] Running
	I1206 09:11:38.893352  388517 system_pods.go:89] "kube-controller-manager-addons-269722" [7e253ad0-19bb-4870-926b-a1569f6f1398] Running
	I1206 09:11:38.893357  388517 system_pods.go:89] "kube-ingress-dns-minikube" [be7d521f-b31b-4231-bd74-8a66d93c3fc4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:11:38.893361  388517 system_pods.go:89] "kube-proxy-c2km9" [fb4b1fd3-c1e4-4d05-b0c9-5b52f82e1849] Running
	I1206 09:11:38.893364  388517 system_pods.go:89] "kube-scheduler-addons-269722" [73132ab3-f6c2-40cb-b3ba-aee3ff21019d] Running
	I1206 09:11:38.893369  388517 system_pods.go:89] "metrics-server-85b7d694d7-h2jq2" [c453240d-89be-44da-9070-e49d7ebbc593] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:11:38.893374  388517 system_pods.go:89] "nvidia-device-plugin-daemonset-knqvl" [916799e0-a31e-4b9a-9acc-b02b72d66299] Running
	I1206 09:11:38.893379  388517 system_pods.go:89] "registry-6b586f9694-rbbt6" [ec4e4a7f-6fd3-435d-bd23-ab587ffa45ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:11:38.893383  388517 system_pods.go:89] "registry-creds-764b6fb674-hkrh8" [b7741462-59ef-4947-ac5d-b5ffab88a570] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:11:38.893389  388517 system_pods.go:89] "registry-proxy-hbw67" [d47f2901-94d3-4e16-a0a8-5155e3f36879] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:11:38.893395  388517 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qbp6w" [0ead8e94-20c0-4dec-801d-66bd3dc39a02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.893400  388517 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v9sd5" [84d9cd78-04cb-4f8d-b8e7-a694b55e490a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:11:38.893403  388517 system_pods.go:89] "storage-provisioner" [07857490-6084-4734-a54d-f7de8ca29ea5] Running
	I1206 09:11:38.893410  388517 system_pods.go:126] duration metric: took 111.509411ms to wait for k8s-apps to be running ...
	I1206 09:11:38.893420  388517 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:11:38.893463  388517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:11:39.039991  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:39.105053  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:39.105115  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:39.435086  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:39.577305  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:39.578361  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:39.891557  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.023055  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.023335  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:40.299367  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.593645009s)
	I1206 09:11:40.300442  388517 addons.go:495] Verifying addon gcp-auth=true in "addons-269722"
	I1206 09:11:40.302591  388517 out.go:179] * Verifying gcp-auth addon...
	I1206 09:11:40.304667  388517 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 09:11:40.334052  388517 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 09:11:40.334086  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:40.389629  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.490307  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:40.490431  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.813628  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:40.836756  388517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.060127251s)
	I1206 09:11:40.836796  388517 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.943309249s)
	I1206 09:11:40.836822  388517 system_svc.go:56] duration metric: took 1.943395217s WaitForService to wait for kubelet
	I1206 09:11:40.836835  388517 kubeadm.go:587] duration metric: took 16.470422509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:11:40.836870  388517 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:11:40.843939  388517 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 09:11:40.843963  388517 node_conditions.go:123] node cpu capacity is 2
	I1206 09:11:40.843980  388517 node_conditions.go:105] duration metric: took 7.101649ms to run NodePressure ...
	I1206 09:11:40.844002  388517 start.go:242] waiting for startup goroutines ...
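	(The node_conditions entries above record the node's reported capacity — 17734596Ki of ephemeral storage and 2 CPUs — as part of the NodePressure verification. For reference, a minimal client-go sketch that reads the same capacity fields from the cluster's node objects is shown below; it assumes a kubeconfig at the default location pointing at this cluster and is only an illustration, not minikube's node_conditions.go.)

    package main

    // Minimal illustration (not minikube's node_conditions.go): read each node's
    // reported CPU and ephemeral-storage capacity, which the log above prints as
    // "node cpu capacity is 2" and "node storage ephemeral capacity is 17734596Ki".
    // Assumes a kubeconfig at the default location pointing at the same cluster.

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		// Prints the same numbers node_conditions.go logs above.
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }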
	I1206 09:11:40.890430  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:40.986853  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:40.992475  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:41.355777  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:41.389062  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:41.487963  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:41.489146  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:41.808891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:41.889779  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:41.985833  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:41.987429  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:42.308166  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:42.409444  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:42.510304  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:42.511035  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:42.809432  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:42.888458  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:42.984315  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:42.987586  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:43.308446  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:43.388943  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:43.496391  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:43.496607  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:43.808230  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:43.888549  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:43.984398  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:43.986840  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:44.312899  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:44.514152  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:44.514383  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:44.515204  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:44.811435  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:44.888384  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:44.984563  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:44.986735  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:45.307401  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:45.388721  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:45.486271  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:45.488952  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:45.808083  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:45.888466  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:45.985838  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:45.987005  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:46.309162  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:46.390486  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:46.484411  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:46.486023  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:46.809473  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:46.888547  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:46.984691  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:46.987824  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:47.308194  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:47.388621  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:47.488407  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:47.488489  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:47.808350  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:47.888379  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:47.984429  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:47.986654  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:48.308303  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:48.391026  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:48.664162  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:48.666762  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:48.808417  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:48.888241  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:48.983979  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:48.986690  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:49.308241  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:49.388925  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:49.484568  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:49.486742  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:49.809515  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:49.889646  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:49.987428  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:49.988527  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:50.366787  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:50.389057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:50.486489  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:50.487907  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:50.810176  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:50.910430  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:50.984648  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:50.992028  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.319081  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:51.388999  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:51.489012  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:51.492499  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.808942  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:51.896270  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:51.990446  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:51.992371  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:52.309057  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:52.389352  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:52.484414  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:52.486682  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:52.809190  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:52.888338  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:52.991907  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:52.992417  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.307785  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:53.390249  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:53.484717  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.486614  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:53.810677  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:53.889084  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:53.987650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:53.990484  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:54.315414  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:54.395125  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:54.494235  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:54.494236  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:54.824289  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:54.888711  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:54.984659  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:54.987146  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:55.308481  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:55.390618  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:55.484329  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:55.485893  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:55.809298  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:55.895192  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:55.989404  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:55.993237  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:56.311289  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:56.389393  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:56.487349  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:56.487525  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:56.808606  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:56.889213  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:56.985510  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:56.991535  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:57.308723  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:57.388636  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:57.488790  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:11:57.490213  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:57.809073  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:57.887830  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:57.984304  388517 kapi.go:107] duration metric: took 19.503171238s to wait for kubernetes.io/minikube-addons=registry ...
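	(The kapi.go:96/107 lines above show minikube polling each addon's pods by label selector, roughly every half second, until they report Running; the registry selector has just completed after about 19.5s, while ingress-nginx, csi-hostpath-driver, and gcp-auth are still pending. A minimal sketch of that polling pattern using client-go follows; the function name waitForLabel, the 500ms interval, the 6-minute timeout, and the default kubeconfig path are illustrative assumptions, not the harness's actual code.)

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls pods matching selector in ns until every matching pod
    // reports phase Running, or the timeout elapses. Illustrative only; not the
    // implementation behind the kapi.go log lines above.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		allRunning := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				allRunning = false
    				break
    			}
    		}
    		if allRunning {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second polling
    	}
    	return fmt.Errorf("pods %q in namespace %q not Running within %v", selector, ns, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Same label selector and namespace the log reports for the registry addon.
    	if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("registry pods are Running")
    }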
	I1206 09:11:57.987671  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:58.309052  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:58.389257  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:58.490899  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:58.809457  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:58.890577  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.025290  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:59.309296  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:59.392111  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.492783  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:11:59.807475  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:11:59.892512  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:11:59.986432  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:00.357752  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:00.391649  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:00.485367  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:00.809392  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:00.887883  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:00.986127  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:01.312877  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:01.413507  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:01.486873  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:01.809042  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:01.889057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:01.986042  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:02.311892  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:02.390027  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:02.491375  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:02.923841  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:02.927183  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:02.986095  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:03.309017  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:03.390050  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:03.486194  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:03.812456  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:03.892317  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:03.986695  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:04.308544  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:04.389102  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:04.486496  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:04.810301  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:04.888379  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:04.986924  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:05.308837  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:05.390825  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:05.485772  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:05.807540  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:05.888733  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:05.985799  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:06.310889  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:06.389329  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:06.492425  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:06.808561  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:06.888635  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:06.985484  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:07.309758  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:07.390275  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:07.486771  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:07.807681  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:07.888485  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:07.987584  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:08.309272  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:08.388617  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:08.487646  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:08.809312  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:08.888519  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:08.988459  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:09.309597  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:09.411374  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:09.487378  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:09.812712  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:09.912033  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.012090  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:10.308609  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:10.389736  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.488553  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:10.808609  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:10.893781  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:10.986159  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:11.669172  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:11.670324  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:11.671190  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:11.811594  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:11.892535  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:11.985928  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:12.310097  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:12.390596  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:12.489116  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:12.809321  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:12.890619  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:12.987653  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:13.309120  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:13.388316  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:13.488650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:13.808316  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:13.889333  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:13.986213  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:14.308276  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:14.388283  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:14.487207  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:14.808143  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:14.888955  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:14.986279  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:15.309037  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:15.388329  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:15.488214  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:15.810501  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:15.896511  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:15.986845  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:16.307928  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:16.390728  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:16.485976  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:16.816944  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:16.970568  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:16.988372  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:17.312911  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:17.390564  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:17.486836  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:17.811792  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:17.891576  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:17.988049  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:18.309919  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:18.388844  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:18.486086  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:18.809596  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:18.890914  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:18.986230  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:19.310480  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:19.410702  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:19.486633  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:19.807918  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:19.888811  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:19.987072  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:20.309606  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:20.412057  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:20.512925  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:20.817199  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:20.949254  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:20.990626  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:21.312159  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:21.389204  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:21.488639  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:21.810891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:21.888759  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:21.988415  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:22.309245  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:22.391268  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:22.486340  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:22.808382  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:22.889770  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:22.988997  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:23.309823  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:23.388910  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:23.489579  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:23.810562  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:23.889125  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:23.986750  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:24.308898  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:24.389306  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:24.486339  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:24.809381  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:24.888322  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:24.987056  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:25.309252  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:25.388372  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:25.486924  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:25.810099  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:25.891569  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:25.993945  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:26.314253  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:26.503975  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:26.504104  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:26.811809  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:26.889063  388517 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:12:26.990570  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:27.308661  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:27.388783  388517 kapi.go:107] duration metric: took 54.003785227s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 09:12:27.539433  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:27.808824  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:27.987339  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:28.311281  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:28.487383  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:28.810397  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:28.990303  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:29.309345  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:29.488470  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:29.811844  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:29.987408  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:30.311108  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:30.487049  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:30.807650  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:30.986406  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:31.309915  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:31.486400  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:31.814032  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:31.989103  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:32.311817  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:32.486527  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:32.808601  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:32.989352  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:33.309084  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:33.486427  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:33.809272  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:33.986717  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:12:34.308891  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:34.486989  388517 kapi.go:107] duration metric: took 56.004420234s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 09:12:34.808808  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:35.310012  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:35.808588  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:36.309169  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:36.808993  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:37.310066  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:37.808459  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:38.308629  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:38.811741  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:39.309361  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:39.809037  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:40.308704  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:40.808398  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:41.307791  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:41.808294  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:42.308956  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:42.809502  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:43.307669  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:43.810175  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:44.309568  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:44.809320  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:45.309320  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:45.807962  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:46.311821  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:46.808138  388517 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:12:47.308750  388517 kapi.go:107] duration metric: took 1m7.004080739s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 09:12:47.309965  388517 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-269722 cluster.
	I1206 09:12:47.310907  388517 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 09:12:47.312086  388517 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
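	(For reference, the `gcp-auth-skip-secret` label mentioned in the output above lives in the pod's metadata. A minimal Go sketch of such a pod using the client-go API types follows; the pod name, image, and the "true" label value are illustrative assumptions, since the message only names the label key.)

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Hypothetical pod that opts out of gcp-auth credential injection by
		// carrying the gcp-auth-skip-secret label key described above
		// ("true" is used as the value here by convention).
		pod := &corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "gcr.io/example/app:latest"},
				},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out))
	}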
	I1206 09:12:47.313288  388517 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, storage-provisioner, inspektor-gadget, registry-creds, volcano, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1206 09:12:47.314294  388517 addons.go:530] duration metric: took 1m22.947828238s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass storage-provisioner inspektor-gadget registry-creds volcano cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1206 09:12:47.314341  388517 start.go:247] waiting for cluster config update ...
	I1206 09:12:47.314373  388517 start.go:256] writing updated cluster config ...
	I1206 09:12:47.314678  388517 ssh_runner.go:195] Run: rm -f paused
	I1206 09:12:47.321984  388517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:12:47.325938  388517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l7sr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.331363  388517 pod_ready.go:94] pod "coredns-66bc5c9577-l7sr8" is "Ready"
	I1206 09:12:47.331382  388517 pod_ready.go:86] duration metric: took 5.423953ms for pod "coredns-66bc5c9577-l7sr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.333935  388517 pod_ready.go:83] waiting for pod "etcd-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.339670  388517 pod_ready.go:94] pod "etcd-addons-269722" is "Ready"
	I1206 09:12:47.339686  388517 pod_ready.go:86] duration metric: took 5.735911ms for pod "etcd-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.341852  388517 pod_ready.go:83] waiting for pod "kube-apiserver-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.348825  388517 pod_ready.go:94] pod "kube-apiserver-addons-269722" is "Ready"
	I1206 09:12:47.348841  388517 pod_ready.go:86] duration metric: took 6.965989ms for pod "kube-apiserver-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.351661  388517 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.728666  388517 pod_ready.go:94] pod "kube-controller-manager-addons-269722" is "Ready"
	I1206 09:12:47.728694  388517 pod_ready.go:86] duration metric: took 377.017246ms for pod "kube-controller-manager-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:47.928250  388517 pod_ready.go:83] waiting for pod "kube-proxy-c2km9" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.326318  388517 pod_ready.go:94] pod "kube-proxy-c2km9" is "Ready"
	I1206 09:12:48.326347  388517 pod_ready.go:86] duration metric: took 398.070754ms for pod "kube-proxy-c2km9" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.527945  388517 pod_ready.go:83] waiting for pod "kube-scheduler-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.925436  388517 pod_ready.go:94] pod "kube-scheduler-addons-269722" is "Ready"
	I1206 09:12:48.925477  388517 pod_ready.go:86] duration metric: took 397.504009ms for pod "kube-scheduler-addons-269722" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:12:48.925497  388517 pod_ready.go:40] duration metric: took 1.603486959s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:12:48.968795  388517 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:12:48.970523  388517 out.go:179] * Done! kubectl is now configured to use "addons-269722" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	557d40ab6aa66       56cc512116c8f       5 minutes ago       Running             busybox                                  0                   09f2b56f9baa0       busybox                                    default
	29c2d038bf437       738351fd438f0       11 minutes ago      Running             csi-snapshotter                          0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	5d8ecc80d5382       931dbfd16f87c       11 minutes ago      Running             csi-provisioner                          0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	fd0e1a7571386       e899260153aed       11 minutes ago      Running             liveness-probe                           0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	24d11a8b11e79       e255e073c508c       11 minutes ago      Running             hostpath                                 0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	5ca832afab7b5       88ef14a257f42       11 minutes ago      Running             node-driver-registrar                    0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	c4cccebac4fc4       97fe896f8c07b       11 minutes ago      Running             controller                               0                   9ee054c3901ad       ingress-nginx-controller-6c8bf45fb-ndk8c   ingress-nginx
	2630d4a83ae5f       19a639eda60f0       12 minutes ago      Running             csi-resizer                              0                   a312cf43898ad       csi-hostpath-resizer-0                     kube-system
	5bd7e91038ad6       a1ed5895ba635       12 minutes ago      Running             csi-external-health-monitor-controller   0                   485b2f551e860       csi-hostpathplugin-c5bss                   kube-system
	1ff38ec18e78f       59cbb42146a37       12 minutes ago      Running             csi-attacher                             0                   73074a1a93680       csi-hostpath-attacher-0                    kube-system
	278c91c11ce27       aa61ee9c70bc4       12 minutes ago      Running             volume-snapshot-controller               0                   4bcff1b74bfec       snapshot-controller-7d9fbc56b8-qbp6w       kube-system
	31ec84f4556b1       aa61ee9c70bc4       12 minutes ago      Running             volume-snapshot-controller               0                   49c8968cc1ce1       snapshot-controller-7d9fbc56b8-v9sd5       kube-system
	864a2ecb4396f       884bd0ac01c8f       12 minutes ago      Exited              patch                                    0                   3ddf53bb8795f       ingress-nginx-admission-patch-xpn6k        ingress-nginx
	2850465598faa       e16d1e3a10667       12 minutes ago      Running             local-path-provisioner                   0                   3e10d3adfa610       local-path-provisioner-648f6765c9-g86zf    local-path-storage
	c2e7e0b7588b1       884bd0ac01c8f       12 minutes ago      Exited              create                                   0                   1ca23ac12776f       ingress-nginx-admission-create-kl75g       ingress-nginx
	2774623c95b6c       b6ab53fbfedaa       12 minutes ago      Running             minikube-ingress-dns                     0                   a84f9f0b8a344       kube-ingress-dns-minikube                  kube-system
	d9e6d13d8e418       d5e667c0f2bb6       12 minutes ago      Running             amd-gpu-device-plugin                    0                   479fca73c33e3       amd-gpu-device-plugin-4x5bp                kube-system
	a9394a7445ed6       6e38f40d628db       12 minutes ago      Running             storage-provisioner                      0                   89b1f84c8945f       storage-provisioner                        kube-system
	e636e6172c8c9       52546a367cc9e       12 minutes ago      Running             coredns                                  0                   18cf9f60905af       coredns-66bc5c9577-l7sr8                   kube-system
	d9ab1c94b0adc       8aa150647e88a       12 minutes ago      Running             kube-proxy                               0                   7ce46fc8fe779       kube-proxy-c2km9                           kube-system
	f7319b640fed7       a3e246e9556e9       13 minutes ago      Running             etcd                                     0                   5d2b5e40c2235       etcd-addons-269722                         kube-system
	31363d509c1e7       88320b5498ff2       13 minutes ago      Running             kube-scheduler                           0                   f53f47f2f0dc9       kube-scheduler-addons-269722               kube-system
	c301895eb03e7       01e8bacf0f500       13 minutes ago      Running             kube-controller-manager                  0                   afc5069ef7820       kube-controller-manager-addons-269722      kube-system
	95341ea890f7a       a5f569d49a979       13 minutes ago      Running             kube-apiserver                           0                   fb1d3f9401a55       kube-apiserver-addons-269722               kube-system
	
	
	==> containerd <==
	Dec 06 09:23:09 addons-269722 containerd[831]: time="2025-12-06T09:23:09.924528191Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 06 09:23:09 addons-269722 containerd[831]: time="2025-12-06T09:23:09.928235630Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:23:10 addons-269722 containerd[831]: time="2025-12-06T09:23:10.185653865Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:23:10 addons-269722 containerd[831]: time="2025-12-06T09:23:10.840510969Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:23:10 addons-269722 containerd[831]: time="2025-12-06T09:23:10.840584835Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=10979"
	Dec 06 09:23:36 addons-269722 containerd[831]: time="2025-12-06T09:23:36.640054776Z" level=info msg="StopPodSandbox for \"094ce5cbff6d1ea1b1001d7d8646f4c09417ef2620dc864df5b86240ffb4a239\""
	Dec 06 09:23:36 addons-269722 containerd[831]: time="2025-12-06T09:23:36.715598035Z" level=info msg="shim disconnected" id=094ce5cbff6d1ea1b1001d7d8646f4c09417ef2620dc864df5b86240ffb4a239 namespace=k8s.io
	Dec 06 09:23:36 addons-269722 containerd[831]: time="2025-12-06T09:23:36.715971646Z" level=warning msg="cleaning up after shim disconnected" id=094ce5cbff6d1ea1b1001d7d8646f4c09417ef2620dc864df5b86240ffb4a239 namespace=k8s.io
	Dec 06 09:23:36 addons-269722 containerd[831]: time="2025-12-06T09:23:36.716059091Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 06 09:23:36 addons-269722 containerd[831]: time="2025-12-06T09:23:36.797987782Z" level=info msg="TearDown network for sandbox \"094ce5cbff6d1ea1b1001d7d8646f4c09417ef2620dc864df5b86240ffb4a239\" successfully"
	Dec 06 09:23:36 addons-269722 containerd[831]: time="2025-12-06T09:23:36.798073269Z" level=info msg="StopPodSandbox for \"094ce5cbff6d1ea1b1001d7d8646f4c09417ef2620dc864df5b86240ffb4a239\" returns successfully"
	Dec 06 09:24:06 addons-269722 containerd[831]: time="2025-12-06T09:24:06.994638592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98,Uid:5136dd09-352a-44d4-ba6f-089cab8dd1a3,Namespace:local-path-storage,Attempt:0,}"
	Dec 06 09:24:07 addons-269722 containerd[831]: time="2025-12-06T09:24:07.140595919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 06 09:24:07 addons-269722 containerd[831]: time="2025-12-06T09:24:07.140701130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 06 09:24:07 addons-269722 containerd[831]: time="2025-12-06T09:24:07.140712783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:24:07 addons-269722 containerd[831]: time="2025-12-06T09:24:07.140813923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:24:07 addons-269722 containerd[831]: time="2025-12-06T09:24:07.216887555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98,Uid:5136dd09-352a-44d4-ba6f-089cab8dd1a3,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"5769400f5ef6b17191e270fb4aedec62969db466103ab321b855874f40edf11c\""
	Dec 06 09:24:07 addons-269722 containerd[831]: time="2025-12-06T09:24:07.220359929Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 06 09:24:07 addons-269722 containerd[831]: time="2025-12-06T09:24:07.223001545Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:24:07 addons-269722 containerd[831]: time="2025-12-06T09:24:07.477956611Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:24:08 addons-269722 containerd[831]: time="2025-12-06T09:24:08.356982320Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:24:08 addons-269722 containerd[831]: time="2025-12-06T09:24:08.357119970Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=13274"
	Dec 06 09:24:18 addons-269722 containerd[831]: time="2025-12-06T09:24:18.924327217Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 06 09:24:18 addons-269722 containerd[831]: time="2025-12-06T09:24:18.928486077Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:24:20 addons-269722 containerd[831]: time="2025-12-06T09:24:20.569051839Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
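	(The repeated 429 Too Many Requests responses above are Docker Hub's unauthenticated pull rate limit; this is what keeps the busybox image, and the Volcano scheduler image in the failing test, stuck in ImagePullBackOff. A minimal Go sketch of the usual client-side mitigation, jittered exponential backoff with a retry cap, is shown below; pullImage is a hypothetical stand-in for whatever performs the pull, not containerd's API.)

	package main

	import (
		"context"
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// errRateLimited marks an HTTP 429 Too Many Requests response from the registry.
	var errRateLimited = errors.New("429 Too Many Requests")

	// pullImage is a hypothetical stand-in for an image pull attempt; the test
	// environment above goes through containerd, not this function.
	func pullImage(ctx context.Context, ref string) error {
		// ... perform the pull; return errRateLimited on HTTP 429 ...
		return errRateLimited
	}

	// pullWithBackoff retries rate-limited pulls with jittered exponential backoff.
	func pullWithBackoff(ctx context.Context, ref string, attempts int) error {
		delay := time.Second
		for i := 0; i < attempts; i++ {
			err := pullImage(ctx, ref)
			if err == nil || !errors.Is(err, errRateLimited) {
				return err
			}
			// Wait delay plus up to 50% jitter, then double the delay.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(sleep):
			}
			delay *= 2
		}
		return fmt.Errorf("pull %s: gave up after %d rate-limited attempts", ref, attempts)
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		fmt.Println(pullWithBackoff(ctx, "docker.io/library/busybox:stable", 5))
	}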
	
	
	==> coredns [e636e6172c8c93ebe7783047ae4449227f6f37f80a082ff4fd383ebc5d08fdbe] <==
	[INFO] 10.244.0.8:51474 - 59613 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000209051s
	[INFO] 10.244.0.8:51474 - 58064 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00013173s
	[INFO] 10.244.0.8:51474 - 29072 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084614s
	[INFO] 10.244.0.8:51474 - 28407 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000124845s
	[INFO] 10.244.0.8:51474 - 5185 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106747s
	[INFO] 10.244.0.8:51474 - 28903 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000097914s
	[INFO] 10.244.0.8:51474 - 44135 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000086701s
	[INFO] 10.244.0.8:42198 - 56025 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000124465s
	[INFO] 10.244.0.8:42198 - 58448 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118323s
	[INFO] 10.244.0.8:40240 - 52465 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104193s
	[INFO] 10.244.0.8:40240 - 52746 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113004s
	[INFO] 10.244.0.8:49362 - 65347 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126485s
	[INFO] 10.244.0.8:49362 - 110 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000216341s
	[INFO] 10.244.0.8:51040 - 59068 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087119s
	[INFO] 10.244.0.8:51040 - 59346 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118565s
	[INFO] 10.244.0.27:48228 - 49165 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000319642s
	[INFO] 10.244.0.27:40396 - 12915 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001198011s
	[INFO] 10.244.0.27:39038 - 53409 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158695s
	[INFO] 10.244.0.27:59026 - 7807 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134321s
	[INFO] 10.244.0.27:32836 - 36351 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085705s
	[INFO] 10.244.0.27:33578 - 24448 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114082s
	[INFO] 10.244.0.27:49566 - 16674 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003361826s
	[INFO] 10.244.0.27:37372 - 21961 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004334216s
	[INFO] 10.244.0.31:37715 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000570157s
	[INFO] 10.244.0.31:57352 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162981s
	
	
	==> describe nodes <==
	Name:               addons-269722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-269722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-269722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_11_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-269722
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-269722"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-269722
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:24:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:19:50 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:19:50 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:19:50 +0000   Sat, 06 Dec 2025 09:11:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:19:50 +0000   Sat, 06 Dec 2025 09:11:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    addons-269722
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 faaa974faf9d46f8a3b502afcdf78e43
	  System UUID:                faaa974f-af9d-46f8-a3b5-02afcdf78e43
	  Boot ID:                    33004088-aa48-42d5-ac29-91fbfe5a6c68
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-ndk8c                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 amd-gpu-device-plugin-4x5bp                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-l7sr8                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-c5bss                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-269722                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-269722                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-269722                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-c2km9                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-269722                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-qbp6w                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-v9sd5                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-g86zf                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-269722 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-269722 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-269722 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-269722 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-269722 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-269722 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-269722 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-269722 event: Registered Node addons-269722 in Controller
	
	
	==> dmesg <==
	[  +0.266835] kauditd_printk_skb: 464 callbacks suppressed
	[  +0.168903] kauditd_printk_skb: 353 callbacks suppressed
	[  +9.726100] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.920301] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 6 09:12] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.529426] kauditd_printk_skb: 85 callbacks suppressed
	[  +1.897097] kauditd_printk_skb: 166 callbacks suppressed
	[  +2.318976] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.568626] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.319087] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.000694] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 53 callbacks suppressed
	[Dec 6 09:18] kauditd_printk_skb: 47 callbacks suppressed
	[ +48.658661] kauditd_printk_skb: 11 callbacks suppressed
	[Dec 6 09:19] kauditd_printk_skb: 67 callbacks suppressed
	[ +10.881930] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000283] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.748225] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 37 callbacks suppressed
	[  +3.911192] kauditd_printk_skb: 177 callbacks suppressed
	[  +1.378691] kauditd_printk_skb: 126 callbacks suppressed
	[Dec 6 09:21] kauditd_printk_skb: 15 callbacks suppressed
	[  +0.000310] kauditd_printk_skb: 9 callbacks suppressed
	[Dec 6 09:23] kauditd_printk_skb: 26 callbacks suppressed
	[Dec 6 09:24] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [f7319b640fed7119b3d158c30e3bc2dd128fc0442cd17b3131fd715d76a44c9a] <==
	{"level":"warn","ts":"2025-12-06T09:12:02.912205Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.029086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:02.912295Z","caller":"traceutil/trace.go:172","msg":"trace[2137904910] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1147; }","duration":"113.394912ms","start":"2025-12-06T09:12:02.798891Z","end":"2025-12-06T09:12:02.912286Z","steps":["trace[2137904910] 'agreement among raft nodes before linearized reading'  (duration: 112.675357ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:02.912373Z","caller":"traceutil/trace.go:172","msg":"trace[1452242551] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"297.5818ms","start":"2025-12-06T09:12:02.614786Z","end":"2025-12-06T09:12:02.912368Z","steps":["trace[1452242551] 'process raft request'  (duration: 296.713443ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:11.568524Z","caller":"traceutil/trace.go:172","msg":"trace[574553895] linearizableReadLoop","detail":"{readStateIndex:1209; appliedIndex:1209; }","duration":"261.730098ms","start":"2025-12-06T09:12:11.306778Z","end":"2025-12-06T09:12:11.568508Z","steps":["trace[574553895] 'read index received'  (duration: 261.726617ms)","trace[574553895] 'applied index is now lower than readState.Index'  (duration: 3.06µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.650658Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.826038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.650735Z","caller":"traceutil/trace.go:172","msg":"trace[1035894961] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1184; }","duration":"343.946028ms","start":"2025-12-06T09:12:11.306774Z","end":"2025-12-06T09:12:11.650720Z","steps":["trace[1035894961] 'agreement among raft nodes before linearized reading'  (duration: 261.814135ms)","trace[1035894961] 'range keys from in-memory index tree'  (duration: 81.970543ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.650785Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:12:11.306763Z","time spent":"344.009881ms","remote":"127.0.0.1:53040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:12:11.651140Z","caller":"traceutil/trace.go:172","msg":"trace[483765702] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"350.392753ms","start":"2025-12-06T09:12:11.300733Z","end":"2025-12-06T09:12:11.651125Z","steps":["trace[483765702] 'process raft request'  (duration: 267.904896ms)","trace[483765702] 'compare'  (duration: 81.445642ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:12:11.651205Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:12:11.300717Z","time spent":"350.449818ms","remote":"127.0.0.1:53164","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<>"}
	{"level":"warn","ts":"2025-12-06T09:12:11.651419Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.416676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651477Z","caller":"traceutil/trace.go:172","msg":"trace[167194031] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1185; }","duration":"172.473278ms","start":"2025-12-06T09:12:11.478992Z","end":"2025-12-06T09:12:11.651465Z","steps":["trace[167194031] 'agreement among raft nodes before linearized reading'  (duration: 172.38943ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:11.651605Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.385049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651660Z","caller":"traceutil/trace.go:172","msg":"trace[1143122093] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1185; }","duration":"270.440925ms","start":"2025-12-06T09:12:11.381211Z","end":"2025-12-06T09:12:11.651652Z","steps":["trace[1143122093] 'agreement among raft nodes before linearized reading'  (duration: 270.367937ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:11.651812Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"298.784519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:11.651836Z","caller":"traceutil/trace.go:172","msg":"trace[535987253] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1185; }","duration":"298.810243ms","start":"2025-12-06T09:12:11.353018Z","end":"2025-12-06T09:12:11.651829Z","steps":["trace[535987253] 'agreement among raft nodes before linearized reading'  (duration: 298.76303ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:12:20.929795Z","caller":"traceutil/trace.go:172","msg":"trace[628627548] transaction","detail":"{read_only:false; response_revision:1260; number_of_response:1; }","duration":"105.667962ms","start":"2025-12-06T09:12:20.824110Z","end":"2025-12-06T09:12:20.929778Z","steps":["trace[628627548] 'process raft request'  (duration: 105.596429ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:23.778852Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.603155ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:23.779380Z","caller":"traceutil/trace.go:172","msg":"trace[424992269] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1281; }","duration":"218.131424ms","start":"2025-12-06T09:12:23.561231Z","end":"2025-12-06T09:12:23.779363Z","steps":["trace[424992269] 'range keys from in-memory index tree'  (duration: 217.594054ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:12:26.494846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.642654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:12:26.495325Z","caller":"traceutil/trace.go:172","msg":"trace[102060551] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1290; }","duration":"113.581468ms","start":"2025-12-06T09:12:26.381729Z","end":"2025-12-06T09:12:26.495310Z","steps":["trace[102060551] 'range keys from in-memory index tree'  (duration: 112.580581ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:13:20.713154Z","caller":"traceutil/trace.go:172","msg":"trace[1259088558] transaction","detail":"{read_only:false; response_revision:1478; number_of_response:1; }","duration":"103.020152ms","start":"2025-12-06T09:13:20.609588Z","end":"2025-12-06T09:13:20.712608Z","steps":["trace[1259088558] 'process raft request'  (duration: 102.875042ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:18:37.035751Z","caller":"traceutil/trace.go:172","msg":"trace[10856222] transaction","detail":"{read_only:false; response_revision:2013; number_of_response:1; }","duration":"171.241207ms","start":"2025-12-06T09:18:36.864442Z","end":"2025-12-06T09:18:37.035683Z","steps":["trace[10856222] 'process raft request'  (duration: 170.245197ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:21:15.400819Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1792}
	{"level":"info","ts":"2025-12-06T09:21:15.571936Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1792,"took":"167.85031ms","hash":732329829,"current-db-size-bytes":10612736,"current-db-size":"11 MB","current-db-size-in-use-bytes":7192576,"current-db-size-in-use":"7.2 MB"}
	{"level":"info","ts":"2025-12-06T09:21:15.572113Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":732329829,"revision":1792,"compact-revision":-1}
	
	
	==> kernel <==
	 09:24:21 up 13 min,  0 users,  load average: 0.77, 0.72, 0.64
	Linux addons-269722 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [95341ea890f7aa882f4bc2a6906002451241d8c5faa071707f5de92b27e20ce7] <==
	I1206 09:18:53.018886       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1206 09:18:53.078403       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1206 09:18:53.122449       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1206 09:18:53.135000       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1206 09:18:53.224059       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1206 09:18:53.568132       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1206 09:18:53.690499       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1206 09:18:53.805719       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1206 09:18:53.827769       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W1206 09:18:53.952581       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1206 09:18:54.124471       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1206 09:18:54.247215       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	I1206 09:18:54.320955       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1206 09:18:54.331045       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1206 09:18:54.378594       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1206 09:18:55.332669       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1206 09:18:55.731925       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1206 09:19:11.494382       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:53920: use of closed network connection
	E1206 09:19:11.675647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:53948: use of closed network connection
	I1206 09:19:20.977935       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.33.16"}
	E1206 09:19:32.614060       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:59722: use of closed network connection
	I1206 09:19:42.464058       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 09:19:42.639393       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.3.234"}
	I1206 09:19:58.097224       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1206 09:21:17.002283       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c301895eb03e76a7f98c21fd67491f3e3114e008ac0bc660fb3871dde69fdff8] <==
	E1206 09:23:25.374368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:23:32.253789       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:23:32.255917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:23:36.881960       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:23:36.883665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:23:42.136792       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:23:42.138159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:23:54.588929       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:23:54.590027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:03.806139       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:03.807287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:06.683202       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:06.684723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:08.970128       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:08.971141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:09.475883       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:09.476923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:11.076827       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:11.077922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:15.965796       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:15.967106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:17.616056       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:17.617210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:24:19.893736       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:24:19.895163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [d9ab1c94b0adcd19eace1b7a10c0f065d7c953fc676839d82393eaab4f0c1819] <==
	I1206 09:11:27.430778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:11:27.531232       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:11:27.531444       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.220"]
	E1206 09:11:27.531895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:11:27.678473       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:11:27.678923       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:11:27.679749       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:11:27.716021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:11:27.719059       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:11:27.719117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:11:27.726703       1 config.go:200] "Starting service config controller"
	I1206 09:11:27.726733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:11:27.726750       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:11:27.726754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:11:27.730726       1 config.go:309] "Starting node config controller"
	I1206 09:11:27.730967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:11:27.730985       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:11:27.726764       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:11:27.736817       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:11:27.827489       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:11:27.827527       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:11:27.837415       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [31363d509c1e784ea3123303af98a26bde6cf40b74abff49509bf33b99ca8f00] <==
	E1206 09:11:17.083720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:11:17.083797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:11:17.083954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:11:17.085026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:11:17.085610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:11:17.085977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:11:17.086442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:11:17.086495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:11:17.086552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:11:17.086667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:11:17.086930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:11:17.939163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:11:17.952354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:11:17.975596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:11:18.009464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:11:18.049043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:11:18.084056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:11:18.094385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:11:18.198477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:11:18.257306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:11:18.287686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:11:18.314012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:11:18.315115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:11:18.580055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:11:21.477327       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:23:36 addons-269722 kubelet[1529]: I1206 09:23:36.940784    1529 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f2c4aeab-06e7-4e32-be78-14d6fc196de4-script\") on node \"addons-269722\" DevicePath \"\""
	Dec 06 09:23:36 addons-269722 kubelet[1529]: I1206 09:23:36.940827    1529 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f2c4aeab-06e7-4e32-be78-14d6fc196de4-data\") on node \"addons-269722\" DevicePath \"\""
	Dec 06 09:23:36 addons-269722 kubelet[1529]: I1206 09:23:36.940837    1529 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xxfh\" (UniqueName: \"kubernetes.io/projected/f2c4aeab-06e7-4e32-be78-14d6fc196de4-kube-api-access-7xxfh\") on node \"addons-269722\" DevicePath \"\""
	Dec 06 09:23:37 addons-269722 kubelet[1529]: I1206 09:23:37.925557    1529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c4aeab-06e7-4e32-be78-14d6fc196de4" path="/var/lib/kubelet/pods/f2c4aeab-06e7-4e32-be78-14d6fc196de4/volumes"
	Dec 06 09:23:46 addons-269722 kubelet[1529]: I1206 09:23:46.921466    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-l7sr8" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:23:49 addons-269722 kubelet[1529]: E1206 09:23:49.922883    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:23:49 addons-269722 kubelet[1529]: E1206 09:23:49.926422    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:24:02 addons-269722 kubelet[1529]: E1206 09:24:02.923425    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:24:03 addons-269722 kubelet[1529]: I1206 09:24:03.922195    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-4x5bp" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:24:04 addons-269722 kubelet[1529]: E1206 09:24:04.922573    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:24:06 addons-269722 kubelet[1529]: I1206 09:24:06.770875    1529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5136dd09-352a-44d4-ba6f-089cab8dd1a3-data\") pod \"helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98\" (UID: \"5136dd09-352a-44d4-ba6f-089cab8dd1a3\") " pod="local-path-storage/helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98"
	Dec 06 09:24:06 addons-269722 kubelet[1529]: I1206 09:24:06.770922    1529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5136dd09-352a-44d4-ba6f-089cab8dd1a3-script\") pod \"helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98\" (UID: \"5136dd09-352a-44d4-ba6f-089cab8dd1a3\") " pod="local-path-storage/helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98"
	Dec 06 09:24:06 addons-269722 kubelet[1529]: I1206 09:24:06.770939    1529 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sczl9\" (UniqueName: \"kubernetes.io/projected/5136dd09-352a-44d4-ba6f-089cab8dd1a3-kube-api-access-sczl9\") pod \"helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98\" (UID: \"5136dd09-352a-44d4-ba6f-089cab8dd1a3\") " pod="local-path-storage/helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98"
	Dec 06 09:24:06 addons-269722 kubelet[1529]: I1206 09:24:06.922187    1529 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:24:08 addons-269722 kubelet[1529]: E1206 09:24:08.357654    1529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:24:08 addons-269722 kubelet[1529]: E1206 09:24:08.357705    1529 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:24:08 addons-269722 kubelet[1529]: E1206 09:24:08.357788    1529 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98_local-path-storage(5136dd09-352a-44d4-ba6f-089cab8dd1a3): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:24:08 addons-269722 kubelet[1529]: E1206 09:24:08.357820    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98" podUID="5136dd09-352a-44d4-ba6f-089cab8dd1a3"
	Dec 06 09:24:08 addons-269722 kubelet[1529]: E1206 09:24:08.523176    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98" podUID="5136dd09-352a-44d4-ba6f-089cab8dd1a3"
	Dec 06 09:24:14 addons-269722 kubelet[1529]: E1206 09:24:14.923355    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9f2e16bd-5c5a-4de7-8925-9e8608d94e2b"
	Dec 06 09:24:17 addons-269722 kubelet[1529]: E1206 09:24:17.922573    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="955ebad4-b055-4cbf-95e3-243af9483d37"
	Dec 06 09:24:21 addons-269722 kubelet[1529]: E1206 09:24:21.259235    1529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:24:21 addons-269722 kubelet[1529]: E1206 09:24:21.259422    1529 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:24:21 addons-269722 kubelet[1529]: E1206 09:24:21.260389    1529 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98_local-path-storage(5136dd09-352a-44d4-ba6f-089cab8dd1a3): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:24:21 addons-269722 kubelet[1529]: E1206 09:24:21.260586    1529 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98" podUID="5136dd09-352a-44d4-ba6f-089cab8dd1a3"
	
	
	==> storage-provisioner [a9394a7445ed60a376c7cd3e75aaac67b588412df8710faeea1ea9b282a9b119] <==
	W1206 09:23:57.117960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:23:59.121349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:23:59.126180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:01.130793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:01.136987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:03.139904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:03.149846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:05.153307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:05.158915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:07.163238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:07.169788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:09.172835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:09.181313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:11.184436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:11.189203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:13.192788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:13.205155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:15.209554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:15.217037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:17.221589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:17.227009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:19.230152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:19.237840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:21.241197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:21.252927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
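Every pull in the kubelet log above fails against registry-1.docker.io with HTTP 429 ("toomanyrequests: You have reached your unauthenticated pull rate limit"), which is what keeps nginx, task-pv-pod and the local-path helper pod in ImagePullBackOff. A minimal sketch of one possible mitigation, assuming Docker Hub credentials are available to the CI host (the secret name "regcred" and the DOCKERHUB_USER/DOCKERHUB_TOKEN variables are placeholders, and each affected namespace's service account would need the same patch):

	# Create a Docker Hub pull secret in the default namespace and attach it
	# to the default service account so newly admitted pods inherit it.
	kubectl --context addons-269722 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKERHUB_USER" \
	  --docker-password="$DOCKERHUB_TOKEN"
	kubectl --context addons-269722 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

The secret is injected at pod admission time, so pods that were already failing do not pick it up retroactively; recreating them is the simplest way to verify the change.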
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-269722 -n addons-269722
helpers_test.go:269: (dbg) Run:  kubectl --context addons-269722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-269722 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-269722 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98: exit status 1 (88.565646ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-269722/192.168.39.220
	Start Time:       Sat, 06 Dec 2025 09:19:42 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tppjg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tppjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m40s                 default-scheduler  Successfully assigned default/nginx to addons-269722
	  Normal   Pulling    97s (x5 over 4m39s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     96s (x5 over 4m38s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     96s (x5 over 4m38s)   kubelet            Error: ErrImagePull
	  Warning  Failed     46s (x15 over 4m38s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    8s (x18 over 4m38s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-269722/192.168.39.220
	Start Time:       Sat, 06 Dec 2025 09:19:41 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sn8jd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-sn8jd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m41s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-269722
	  Warning  Failed     4m24s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    114s (x5 over 4m41s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     113s (x4 over 4m40s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     113s (x5 over 4m40s)  kubelet            Error: ErrImagePull
	  Warning  Failed     33s (x15 over 4m39s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    5s (x17 over 4m39s)   kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z99d9 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-z99d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kl75g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xpn6k" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-269722 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kl75g ingress-nginx-admission-patch-xpn6k helper-pod-create-pvc-4bfef674-e898-43b5-af86-79f7f1aa5f98: exit status 1
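The non-zero exit reflects a namespace mismatch: describe is invoked without -n, so the three NotFound pods, which live in ingress-nginx and local-path-storage (and may already have been cleaned up), cannot be resolved from the default namespace. A namespace-aware variant of the same post-mortem, as a sketch assuming a POSIX shell on the CI host:

	# Describe every non-running pod in its own namespace instead of
	# assuming the default namespace.
	kubectl --context addons-269722 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  kubectl --context addons-269722 -n "$ns" describe pod "$name"
	done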
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.83379252s)
--- FAIL: TestAddons/parallel/LocalPath (344.90s)
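Since every image-pull failure in this run is a Docker Hub 429, it can help to confirm how much anonymous pull quota the CI host has left before rerunning. A sketch using Docker's documented rate-limit probe (assumes curl and jq are installed; ratelimitpreview/test is the repository Docker Hub exposes for this check):

	# Fetch an anonymous pull token and read the rate-limit headers.
	TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -fsSI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'

The ratelimit-limit and ratelimit-remaining headers report the current window; a remaining value of 0 is consistent with the toomanyrequests errors seen above.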

                                                
                                    
TestFunctional/parallel/DashboardCmd (301.99s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-715379 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-715379 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-715379 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-715379 --alsologtostderr -v=1] stderr:
I1206 09:33:22.591189  399561 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:22.591326  399561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:22.591336  399561 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:22.591339  399561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:22.591519  399561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:33:22.592169  399561 mustload.go:66] Loading cluster: functional-715379
I1206 09:33:22.593168  399561 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:22.595328  399561 host.go:66] Checking if "functional-715379" exists ...
I1206 09:33:22.595604  399561 api_server.go:166] Checking apiserver status ...
I1206 09:33:22.595652  399561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 09:33:22.598215  399561 main.go:143] libmachine: domain functional-715379 has defined MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:22.598655  399561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:93:ff", ip: ""} in network mk-functional-715379: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:24 +0000 UTC Type:0 Mac:52:54:00:9c:93:ff Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:functional-715379 Clientid:01:52:54:00:9c:93:ff}
I1206 09:33:22.598692  399561 main.go:143] libmachine: domain functional-715379 has defined IP address 192.168.39.160 and MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:22.598854  399561 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-715379/id_rsa Username:docker}
I1206 09:33:22.699991  399561 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5447/cgroup
W1206 09:33:22.715321  399561 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5447/cgroup: Process exited with status 1
stdout:

stderr:
I1206 09:33:22.715404  399561 ssh_runner.go:195] Run: ls
I1206 09:33:22.721389  399561 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8441/healthz ...
I1206 09:33:22.729935  399561 api_server.go:279] https://192.168.39.160:8441/healthz returned 200:
ok
W1206 09:33:22.729979  399561 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1206 09:33:22.730128  399561 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:22.730144  399561 addons.go:70] Setting dashboard=true in profile "functional-715379"
I1206 09:33:22.730156  399561 addons.go:239] Setting addon dashboard=true in "functional-715379"
I1206 09:33:22.730185  399561 host.go:66] Checking if "functional-715379" exists ...
I1206 09:33:22.733258  399561 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1206 09:33:22.734437  399561 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1206 09:33:22.735512  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1206 09:33:22.735532  399561 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1206 09:33:22.737909  399561 main.go:143] libmachine: domain functional-715379 has defined MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:22.738257  399561 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:93:ff", ip: ""} in network mk-functional-715379: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:24 +0000 UTC Type:0 Mac:52:54:00:9c:93:ff Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:functional-715379 Clientid:01:52:54:00:9c:93:ff}
I1206 09:33:22.738278  399561 main.go:143] libmachine: domain functional-715379 has defined IP address 192.168.39.160 and MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:22.738392  399561 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-715379/id_rsa Username:docker}
I1206 09:33:22.856069  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1206 09:33:22.856091  399561 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1206 09:33:22.884003  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1206 09:33:22.884028  399561 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1206 09:33:22.907977  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1206 09:33:22.907997  399561 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1206 09:33:22.928834  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1206 09:33:22.928854  399561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1206 09:33:22.959446  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1206 09:33:22.959476  399561 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1206 09:33:22.991792  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1206 09:33:22.991819  399561 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1206 09:33:23.014722  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1206 09:33:23.014743  399561 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1206 09:33:23.047950  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1206 09:33:23.047970  399561 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1206 09:33:23.073563  399561 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:33:23.073592  399561 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1206 09:33:23.099507  399561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:33:24.040966  399561 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-715379 addons enable metrics-server

I1206 09:33:24.041973  399561 addons.go:202] Writing out "functional-715379" config to set dashboard=true...
W1206 09:33:24.042211  399561 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1206 09:33:24.042844  399561 kapi.go:59] client config for functional-715379: &rest.Config{Host:"https://192.168.39.160:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.key", CAFile:"/home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1206 09:33:24.043299  399561 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1206 09:33:24.043314  399561 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1206 09:33:24.043320  399561 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1206 09:33:24.043323  399561 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1206 09:33:24.043327  399561 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1206 09:33:24.057709  399561 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  28f13dae-6dee-43d2-8da2-22e8cec94a2d 838 0 2025-12-06 09:33:23 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-06 09:33:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.102.177.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.102.177.204],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1206 09:33:24.057889  399561 out.go:285] * Launching proxy ...
* Launching proxy ...
I1206 09:33:24.057964  399561 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-715379 proxy --port 36195]
I1206 09:33:24.058365  399561 dashboard.go:159] Waiting for kubectl to output host:port ...
I1206 09:33:24.100565  399561 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1206 09:33:24.100609  399561 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1206 09:33:24.109377  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c6475fe-847d-4a18-a947-3729e5dcacef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc000c71880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365680 TLS:<nil>}
I1206 09:33:24.109458  399561 retry.go:31] will retry after 59.903µs: Temporary Error: unexpected response code: 503
I1206 09:33:24.116155  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c15405a-343d-4d26-8c08-7c8c808416b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc00041dd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292a00 TLS:<nil>}
I1206 09:33:24.116214  399561 retry.go:31] will retry after 85.116µs: Temporary Error: unexpected response code: 503
I1206 09:33:24.119854  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[86bcd5fa-ee9b-44af-934f-20bfc3fc72ca] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc000c71980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365900 TLS:<nil>}
I1206 09:33:24.119914  399561 retry.go:31] will retry after 128.236µs: Temporary Error: unexpected response code: 503
I1206 09:33:24.127776  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c27f08bf-fa8f-4037-b1c6-de4992079556] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc00041dec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292b40 TLS:<nil>}
I1206 09:33:24.127844  399561 retry.go:31] will retry after 382.857µs: Temporary Error: unexpected response code: 503
I1206 09:33:24.132824  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6fb6d761-2a97-46f9-9363-c6ec97d4eb1c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc000c71a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365cc0 TLS:<nil>}
I1206 09:33:24.132890  399561 retry.go:31] will retry after 450.699µs: Temporary Error: unexpected response code: 503
I1206 09:33:24.139037  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d319321a-f365-436f-8458-64bb5ec28dbd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc0015a8980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292c80 TLS:<nil>}
I1206 09:33:24.139100  399561 retry.go:31] will retry after 843.754µs: Temporary Error: unexpected response code: 503
I1206 09:33:24.149179  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5e413082-7bae-4b72-90ff-fea970ab8fbf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc000c71b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000432c80 TLS:<nil>}
I1206 09:33:24.149215  399561 retry.go:31] will retry after 973.799µs: Temporary Error: unexpected response code: 503
I1206 09:33:24.155875  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f2b920da-c429-4b0b-a5c8-4b7e6d2c96be] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc00156e0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292dc0 TLS:<nil>}
I1206 09:33:24.155934  399561 retry.go:31] will retry after 1.519257ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.160743  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c3319c9b-34d2-4adb-8416-f658a3fba8ba] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc000c71c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000365e00 TLS:<nil>}
I1206 09:33:24.160799  399561 retry.go:31] will retry after 3.11915ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.168591  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ebd8b65f-7b60-4a7d-a412-aad16748a826] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc0015a8a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292f00 TLS:<nil>}
I1206 09:33:24.168654  399561 retry.go:31] will retry after 3.806718ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.175629  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fa86cac4-9cd0-416a-92c6-0fe34f2c9af7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc000c71d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000432f00 TLS:<nil>}
I1206 09:33:24.175671  399561 retry.go:31] will retry after 5.847599ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.184630  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[798a02c6-c6ff-42d3-bc22-e11f7b36f0f3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc0015a8b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293040 TLS:<nil>}
I1206 09:33:24.184666  399561 retry.go:31] will retry after 5.450708ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.196004  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f305e886-0728-4fc5-aca4-1c55d7c02afb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc00156e1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000433180 TLS:<nil>}
I1206 09:33:24.196061  399561 retry.go:31] will retry after 18.797994ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.226075  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4e103473-b54c-4c54-bb70-08b0fe46915a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc0015a8c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000050c80 TLS:<nil>}
I1206 09:33:24.226132  399561 retry.go:31] will retry after 23.182287ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.258964  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4371148e-7bbf-4b35-b045-ec8b8fc76c62] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc000c71e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004332c0 TLS:<nil>}
I1206 09:33:24.259034  399561 retry.go:31] will retry after 30.595536ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.307403  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bb618e8e-d077-4ec5-9c14-b197cfa3ab13] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc000c71f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293180 TLS:<nil>}
I1206 09:33:24.307468  399561 retry.go:31] will retry after 60.779107ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.373754  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f764efa4-8aa9-4b5a-988e-2bdf3c691e4c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc00167a040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002932c0 TLS:<nil>}
I1206 09:33:24.373853  399561 retry.go:31] will retry after 67.473928ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.445328  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7cd28c1f-0f9a-4204-bd0e-3e8d1087a87f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc00156e380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000050f00 TLS:<nil>}
I1206 09:33:24.445410  399561 retry.go:31] will retry after 52.68211ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.503204  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[27065e19-aafb-42f6-a752-667536ebf2e9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc0008a2940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000051040 TLS:<nil>}
I1206 09:33:24.503265  399561 retry.go:31] will retry after 218.744924ms: Temporary Error: unexpected response code: 503
I1206 09:33:24.726579  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aa90b752-5628-4434-a545-5122eb8b0680] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:24 GMT]] Body:0xc0008a2a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ce780 TLS:<nil>}
I1206 09:33:24.726661  399561 retry.go:31] will retry after 308.474618ms: Temporary Error: unexpected response code: 503
I1206 09:33:25.040635  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9aa121c2-43b0-4221-8035-8f81d94c66a4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:25 GMT]] Body:0xc00156e480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cea00 TLS:<nil>}
I1206 09:33:25.040714  399561 retry.go:31] will retry after 189.151255ms: Temporary Error: unexpected response code: 503
I1206 09:33:25.233835  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d06d4538-709e-437c-be75-f587e104935f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:25 GMT]] Body:0xc00167a0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000512c0 TLS:<nil>}
I1206 09:33:25.233932  399561 retry.go:31] will retry after 315.545243ms: Temporary Error: unexpected response code: 503
I1206 09:33:25.553365  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93501d97-ae34-499d-9892-e29ff01a7c23] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:25 GMT]] Body:0xc0015a8e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293400 TLS:<nil>}
I1206 09:33:25.553430  399561 retry.go:31] will retry after 1.036193541s: Temporary Error: unexpected response code: 503
I1206 09:33:26.593045  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3fc682bd-476b-4baf-86dd-230bbb4219a5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:26 GMT]] Body:0xc00167a140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000433400 TLS:<nil>}
I1206 09:33:26.593118  399561 retry.go:31] will retry after 1.529614135s: Temporary Error: unexpected response code: 503
I1206 09:33:28.127883  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[86ca4cdc-f28c-41e6-940b-2d4930a5734d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:28 GMT]] Body:0xc00156e5c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293540 TLS:<nil>}
I1206 09:33:28.127946  399561 retry.go:31] will retry after 1.949490612s: Temporary Error: unexpected response code: 503
I1206 09:33:30.082162  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[45b19b4e-4213-485d-ac7a-6ee3c03f38ad] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:30 GMT]] Body:0xc00167a240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000051400 TLS:<nil>}
I1206 09:33:30.082254  399561 retry.go:31] will retry after 3.328981643s: Temporary Error: unexpected response code: 503
I1206 09:33:33.416923  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f6774174-f9e7-4737-a1fb-7baa9cce7b22] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:33 GMT]] Body:0xc0015a8f40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000433540 TLS:<nil>}
I1206 09:33:33.417019  399561 retry.go:31] will retry after 3.510201813s: Temporary Error: unexpected response code: 503
I1206 09:33:36.932108  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4793fd09-e2f7-4335-9b2d-d279076f81ce] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:36 GMT]] Body:0xc00156e6c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293680 TLS:<nil>}
I1206 09:33:36.932179  399561 retry.go:31] will retry after 3.882794125s: Temporary Error: unexpected response code: 503
I1206 09:33:40.820097  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b834b486-e910-49c0-a14d-c42d8937630d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:40 GMT]] Body:0xc00156e740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000433680 TLS:<nil>}
I1206 09:33:40.820160  399561 retry.go:31] will retry after 7.770071844s: Temporary Error: unexpected response code: 503
I1206 09:33:48.593815  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[de1ddf6c-79d8-424f-b999-c8586b86c112] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:48 GMT]] Body:0xc00167a380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000051540 TLS:<nil>}
I1206 09:33:48.593888  399561 retry.go:31] will retry after 6.872753107s: Temporary Error: unexpected response code: 503
I1206 09:33:55.470427  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a121587-e8ca-46dc-a9bb-c45bed55bf75] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:33:55 GMT]] Body:0xc00156e880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000517c0 TLS:<nil>}
I1206 09:33:55.470496  399561 retry.go:31] will retry after 25.392624025s: Temporary Error: unexpected response code: 503
I1206 09:34:20.869436  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dec4d761-dee1-4898-8c33-f15cd6c4d7ef] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:34:20 GMT]] Body:0xc00156e900 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004337c0 TLS:<nil>}
I1206 09:34:20.869502  399561 retry.go:31] will retry after 31.462424023s: Temporary Error: unexpected response code: 503
I1206 09:34:52.340032  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[632bc6d2-6032-43a7-9b5c-c9c53d7f3b0a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:34:52 GMT]] Body:0xc00167a500 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000051900 TLS:<nil>}
I1206 09:34:52.340130  399561 retry.go:31] will retry after 38.827689133s: Temporary Error: unexpected response code: 503
I1206 09:35:31.174058  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b8bbb840-c3ec-438d-9fee-dbeafdd1d26d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:35:31 GMT]] Body:0xc00167a040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000432000 TLS:<nil>}
I1206 09:35:31.174129  399561 retry.go:31] will retry after 1m14.754870128s: Temporary Error: unexpected response code: 503
I1206 09:36:45.933616  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7dd0b80e-c0e0-4374-80a9-53c6edc3ceb2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:36:45 GMT]] Body:0xc00156e0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292280 TLS:<nil>}
I1206 09:36:45.933719  399561 retry.go:31] will retry after 53.287707632s: Temporary Error: unexpected response code: 503
I1206 09:37:39.225872  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[25097554-c209-41d5-a11a-992537a8dd55] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:37:39 GMT]] Body:0xc00167a040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000432140 TLS:<nil>}
I1206 09:37:39.225961  399561 retry.go:31] will retry after 31.695492326s: Temporary Error: unexpected response code: 503
I1206 09:38:10.925665  399561 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[13baf32d-ca71-48b2-8fa0-831259501971] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:38:10 GMT]] Body:0xc00156e0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000432780 TLS:<nil>}
I1206 09:38:10.925772  399561 retry.go:31] will retry after 45.209639899s: Temporary Error: unexpected response code: 503
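The block above is minikube's dashboard health probe: retry.go backs off roughly exponentially (from a few hundred microseconds up to more than a minute) while the proxied dashboard URL keeps answering 503, until the test's overall deadline expires. Below is a minimal Go sketch of that kind of probe; the URL, deadline, starting backoff and cap are placeholders mirroring the log, and this is not the minikube test code itself.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Placeholder values mirroring the log above, not the real test parameters.
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		deadline := time.Now().Add(5 * time.Minute)
		backoff := 500 * time.Microsecond

		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("dashboard responded 200, done")
					return
				}
				fmt.Printf("unexpected response code: %d, will retry after %v\n", code, backoff)
			} else {
				fmt.Printf("request failed: %v, will retry after %v\n", err, backoff)
			}
			time.Sleep(backoff)
			backoff *= 2 // grow roughly exponentially, as in the log above
			if backoff > 30*time.Second {
				backoff = 30 * time.Second // cap so one sleep cannot overshoot the deadline
			}
		}
		fmt.Println("dashboard never became ready before the deadline")
	}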
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-715379 -n functional-715379
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-715379 logs -n 25: (1.254965599s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                              ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-715379 ssh sudo umount -f /mount-9p                                                                                  │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh findmnt -T /mount-9p | grep 9p                                                                            │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ mount          │ -p functional-715379 /tmp/TestFunctionalparallelMountCmdspecific-port61418453/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ ssh            │ functional-715379 ssh findmnt -T /mount-9p | grep 9p                                                                            │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh -- ls -la /mount-9p                                                                                       │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh sudo umount -f /mount-9p                                                                                  │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ mount          │ -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount2 --alsologtostderr -v=1              │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ mount          │ -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount1 --alsologtostderr -v=1              │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ mount          │ -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount3 --alsologtostderr -v=1              │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ ssh            │ functional-715379 ssh findmnt -T /mount1                                                                                        │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ ssh            │ functional-715379 ssh findmnt -T /mount1                                                                                        │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh findmnt -T /mount2                                                                                        │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh findmnt -T /mount3                                                                                        │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ mount          │ -p functional-715379 --kill=true                                                                                                │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-715379 --alsologtostderr -v=1                                                                  │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ update-context │ functional-715379 update-context --alsologtostderr -v=2                                                                         │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-715379 update-context --alsologtostderr -v=2                                                                         │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-715379 update-context --alsologtostderr -v=2                                                                         │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls --format short --alsologtostderr                                                                     │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls --format json --alsologtostderr                                                                      │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls --format table --alsologtostderr                                                                     │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls --format yaml --alsologtostderr                                                                      │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh pgrep buildkitd                                                                                           │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ image          │ functional-715379 image build -t localhost/my-image:functional-715379 testdata/build --alsologtostderr                          │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls                                                                                                      │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:33:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:33:10.718401  399186 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:33:10.718506  399186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:10.718515  399186 out.go:374] Setting ErrFile to fd 2...
	I1206 09:33:10.718522  399186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:10.718974  399186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:33:10.719531  399186 out.go:368] Setting JSON to false
	I1206 09:33:10.720768  399186 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8141,"bootTime":1765005450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:33:10.720836  399186 start.go:143] virtualization: kvm guest
	I1206 09:33:10.722601  399186 out.go:179] * [functional-715379] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:33:10.724144  399186 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:33:10.724163  399186 notify.go:221] Checking for updates...
	I1206 09:33:10.726252  399186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:33:10.727407  399186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:33:10.728380  399186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:33:10.729438  399186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:33:10.730564  399186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:33:10.732295  399186 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:33:10.732971  399186 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:33:10.766503  399186 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:33:10.768236  399186 start.go:309] selected driver: kvm2
	I1206 09:33:10.768258  399186 start.go:927] validating driver "kvm2" against &{Name:functional-715379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-715379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:33:10.768384  399186 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:33:10.770464  399186 out.go:203] 
	W1206 09:33:10.771650  399186 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:33:10.772738  399186 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3934e08645b64       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   83334eaba984d       busybox-mount                               default
	39da92c476218       5107333e08a87       5 minutes ago       Running             mysql                     0                   536c9cd032d24       mysql-5bb876957f-lv58m                      default
	aaf4a154d4a0e       9056ab77afb8e       5 minutes ago       Running             echo-server               0                   9f8de906c491b       hello-node-75c85bcc94-pwg8d                 default
	f91e424d91760       6e38f40d628db       5 minutes ago       Running             storage-provisioner       4                   40743e1a40542       storage-provisioner                         kube-system
	839564bcdbe31       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       3                   40743e1a40542       storage-provisioner                         kube-system
	e95fff93d800e       52546a367cc9e       5 minutes ago       Running             coredns                   2                   3473253f1c11f       coredns-66bc5c9577-w4crl                    kube-system
	4ae120c0ef398       8aa150647e88a       5 minutes ago       Running             kube-proxy                2                   1abd2d5923be8       kube-proxy-nscwz                            kube-system
	b8c7b6eb44b2e       a5f569d49a979       5 minutes ago       Running             kube-apiserver            0                   261c4aae752cd       kube-apiserver-functional-715379            kube-system
	5534747c5e4a1       01e8bacf0f500       5 minutes ago       Running             kube-controller-manager   2                   defe7964445a7       kube-controller-manager-functional-715379   kube-system
	bb612a274e0af       88320b5498ff2       5 minutes ago       Running             kube-scheduler            2                   6920096e7b156       kube-scheduler-functional-715379            kube-system
	1bb47db05c08b       a3e246e9556e9       5 minutes ago       Running             etcd                      2                   3191fa856ea72       etcd-functional-715379                      kube-system
	e9812d4774ff1       01e8bacf0f500       6 minutes ago       Exited              kube-controller-manager   1                   defe7964445a7       kube-controller-manager-functional-715379   kube-system
	3a8e87a014af6       a3e246e9556e9       6 minutes ago       Exited              etcd                      1                   3191fa856ea72       etcd-functional-715379                      kube-system
	99f2a113f5da2       88320b5498ff2       6 minutes ago       Exited              kube-scheduler            1                   6920096e7b156       kube-scheduler-functional-715379            kube-system
	ffe447f98c0e8       52546a367cc9e       6 minutes ago       Exited              coredns                   1                   3473253f1c11f       coredns-66bc5c9577-w4crl                    kube-system
	b70b38e2d71c3       8aa150647e88a       6 minutes ago       Exited              kube-proxy                1                   1abd2d5923be8       kube-proxy-nscwz                            kube-system
	
	
	==> containerd <==
	Dec 06 09:35:02 functional-715379 containerd[4525]: time="2025-12-06T09:35:02.481636283Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Dec 06 09:35:02 functional-715379 containerd[4525]: time="2025-12-06T09:35:02.485350639Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:35:02 functional-715379 containerd[4525]: time="2025-12-06T09:35:02.736994240Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:35:03 functional-715379 containerd[4525]: time="2025-12-06T09:35:03.392086847Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:35:03 functional-715379 containerd[4525]: time="2025-12-06T09:35:03.392169137Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Dec 06 09:36:04 functional-715379 containerd[4525]: time="2025-12-06T09:36:04.480646550Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 06 09:36:04 functional-715379 containerd[4525]: time="2025-12-06T09:36:04.483613550Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:36:04 functional-715379 containerd[4525]: time="2025-12-06T09:36:04.751600411Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:36:05 functional-715379 containerd[4525]: time="2025-12-06T09:36:05.419518076Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:36:05 functional-715379 containerd[4525]: time="2025-12-06T09:36:05.419670736Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 06 09:36:13 functional-715379 containerd[4525]: time="2025-12-06T09:36:13.481133981Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 06 09:36:13 functional-715379 containerd[4525]: time="2025-12-06T09:36:13.488416854Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:36:13 functional-715379 containerd[4525]: time="2025-12-06T09:36:13.742988247Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:36:14 functional-715379 containerd[4525]: time="2025-12-06T09:36:14.398302674Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:36:14 functional-715379 containerd[4525]: time="2025-12-06T09:36:14.398385364Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Dec 06 09:36:19 functional-715379 containerd[4525]: time="2025-12-06T09:36:19.481482840Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Dec 06 09:36:19 functional-715379 containerd[4525]: time="2025-12-06T09:36:19.486199828Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:36:19 functional-715379 containerd[4525]: time="2025-12-06T09:36:19.763726711Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:36:20 functional-715379 containerd[4525]: time="2025-12-06T09:36:20.423110062Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:36:20 functional-715379 containerd[4525]: time="2025-12-06T09:36:20.423195479Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Dec 06 09:36:28 functional-715379 containerd[4525]: time="2025-12-06T09:36:28.481046706Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Dec 06 09:36:28 functional-715379 containerd[4525]: time="2025-12-06T09:36:28.483855106Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:36:28 functional-715379 containerd[4525]: time="2025-12-06T09:36:28.726769516Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:36:29 functional-715379 containerd[4525]: time="2025-12-06T09:36:29.385473019Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:36:29 functional-715379 containerd[4525]: time="2025-12-06T09:36:29.385588761Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
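Two separate problems appear in the containerd log above: every pull from docker.io fails with 429 Too Many Requests (Docker Hub's unauthenticated pull rate limit), and each attempt is preceded by "failed to decode hosts.toml" / "invalid `host` tree", i.e. the per-registry hosts file containerd consults is malformed and ignored. For reference, a minimal sketch of a well-formed /etc/containerd/certs.d/docker.io/hosts.toml that routes pulls through a mirror; the mirror endpoint is a placeholder, not the configuration used in this run.

	server = "https://registry-1.docker.io"

	[host."https://mirror.example.com"]
	  capabilities = ["pull", "resolve"]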
	
	
	==> coredns [e95fff93d800ebb4020c57425eff27d077155035afe580120deb1cf0ffa236b9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52569 - 49207 "HINFO IN 2592115185418839874.7134750291161745495. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.08797264s
	
	
	==> coredns [ffe447f98c0e8bf1f37fe60dfda5c2b53943736e141bf7a457b99efad7d86db8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46473 - 11742 "HINFO IN 6274865240676304577.1924407954027326078. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.160460036s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=461": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
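The reflector errors in both coredns logs come from the kubernetes plugin failing to list Namespaces, Services and EndpointSlices while the API server at 10.96.0.1:443 is restarting; coredns holds off serving ("waiting for Kubernetes API before starting server") until those lists succeed. A minimal client-go sketch of the same in-cluster list call, assuming it runs inside a pod with a service account; this is illustrative only, not coredns code.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config reads the mounted service account token and the
		// KUBERNETES_SERVICE_HOST/PORT env vars (10.96.0.1:443 in this cluster).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The same kind of paged list the coredns reflector issues; it fails with
		// "connection refused" while the API server is down.
		nss, err := cs.CoreV1().Namespaces().List(context.Background(), metav1.ListOptions{Limit: 500})
		if err != nil {
			panic(err)
		}
		fmt.Printf("listed %d namespaces\n", len(nss.Items))
	}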
	
	
	==> describe nodes <==
	Name:               functional-715379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-715379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-715379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_30_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:30:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-715379
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:38:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:33:42 +0000   Sat, 06 Dec 2025 09:30:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:33:42 +0000   Sat, 06 Dec 2025 09:30:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:33:42 +0000   Sat, 06 Dec 2025 09:30:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:33:42 +0000   Sat, 06 Dec 2025 09:30:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    functional-715379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 522886bcddf24fd391c46bbe382f6c4d
	  System UUID:                522886bc-ddf2-4fd3-91c4-6bbe382f6c4d
	  Boot ID:                    16e62390-a490-422d-a0eb-821ce79700c3
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-pwg8d                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  default                     hello-node-connect-7d85dfc575-9trdj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  default                     mysql-5bb876957f-lv58m                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m20s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 coredns-66bc5c9577-w4crl                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m33s
	  kube-system                 etcd-functional-715379                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m40s
	  kube-system                 kube-apiserver-functional-715379              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-controller-manager-functional-715379     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-proxy-nscwz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-scheduler-functional-715379              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-2jpwj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pfrh4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m42s                  kube-proxy       
	  Normal  Starting                 6m48s                  kube-proxy       
	  Normal  Starting                 7m31s                  kube-proxy       
	  Normal  Starting                 7m39s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m38s                  kubelet          Node functional-715379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s                  kubelet          Node functional-715379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s                  kubelet          Node functional-715379 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m38s                  kubelet          Node functional-715379 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     7m34s                  cidrAllocator    Node functional-715379 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           7m34s                  node-controller  Node functional-715379 event: Registered Node functional-715379 in Controller
	  Normal  Starting                 6m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m29s (x8 over 6m29s)  kubelet          Node functional-715379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x8 over 6m29s)  kubelet          Node functional-715379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s (x7 over 6m29s)  kubelet          Node functional-715379 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m24s                  node-controller  Node functional-715379 event: Registered Node functional-715379 in Controller
	  Normal  Starting                 5m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m46s (x8 over 5m46s)  kubelet          Node functional-715379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m46s (x8 over 5m46s)  kubelet          Node functional-715379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s (x7 over 5m46s)  kubelet          Node functional-715379 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m40s                  node-controller  Node functional-715379 event: Registered Node functional-715379 in Controller
	
	
	==> dmesg <==
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.079835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.100446] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.090420] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.116265] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.182896] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 09:31] kauditd_printk_skb: 265 callbacks suppressed
	[ +21.604799] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.872374] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.064294] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.790011] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.933410] kauditd_printk_skb: 28 callbacks suppressed
	[Dec 6 09:32] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.118680] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.917854] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.248432] kauditd_printk_skb: 63 callbacks suppressed
	[  +3.519669] kauditd_printk_skb: 59 callbacks suppressed
	[  +4.120396] kauditd_printk_skb: 55 callbacks suppressed
	[Dec 6 09:33] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.000080] kauditd_printk_skb: 120 callbacks suppressed
	[  +2.950233] kauditd_printk_skb: 147 callbacks suppressed
	[  +5.127785] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.226026] crun[8460]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.722275] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [1bb47db05c08beebc85ace42b7d7d0a8e026be57973b887bd7cdce59546987b8] <==
	{"level":"info","ts":"2025-12-06T09:33:11.085434Z","caller":"traceutil/trace.go:172","msg":"trace[471396808] transaction","detail":"{read_only:false; response_revision:716; number_of_response:1; }","duration":"179.446366ms","start":"2025-12-06T09:33:10.905979Z","end":"2025-12-06T09:33:11.085425Z","steps":["trace[471396808] 'process raft request'  (duration: 179.370948ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:12.604056Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"289.547841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:12.604280Z","caller":"traceutil/trace.go:172","msg":"trace[1297274851] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:749; }","duration":"290.605959ms","start":"2025-12-06T09:33:12.313663Z","end":"2025-12-06T09:33:12.604269Z","steps":["trace[1297274851] 'range keys from in-memory index tree'  (duration: 289.501528ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:33:14.457341Z","caller":"traceutil/trace.go:172","msg":"trace[1674933141] linearizableReadLoop","detail":"{readStateIndex:831; appliedIndex:831; }","duration":"283.139177ms","start":"2025-12-06T09:33:14.174187Z","end":"2025-12-06T09:33:14.457326Z","steps":["trace[1674933141] 'read index received'  (duration: 283.134148ms)","trace[1674933141] 'applied index is now lower than readState.Index'  (duration: 4.48µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:33:14.457443Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.240945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:14.457459Z","caller":"traceutil/trace.go:172","msg":"trace[1994588624] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:749; }","duration":"283.270917ms","start":"2025-12-06T09:33:14.174183Z","end":"2025-12-06T09:33:14.457454Z","steps":["trace[1994588624] 'agreement among raft nodes before linearized reading'  (duration: 283.216708ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:14.457731Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.679006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/sp-pod\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:14.457755Z","caller":"traceutil/trace.go:172","msg":"trace[1761726123] range","detail":"{range_begin:/registry/pods/default/sp-pod; range_end:; response_count:0; response_revision:750; }","duration":"154.705605ms","start":"2025-12-06T09:33:14.303042Z","end":"2025-12-06T09:33:14.457748Z","steps":["trace[1761726123] 'agreement among raft nodes before linearized reading'  (duration: 154.665398ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:33:14.457899Z","caller":"traceutil/trace.go:172","msg":"trace[826916383] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"357.317258ms","start":"2025-12-06T09:33:14.100576Z","end":"2025-12-06T09:33:14.457893Z","steps":["trace[826916383] 'process raft request'  (duration: 357.046679ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:14.458315Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:33:14.100555Z","time spent":"357.424353ms","remote":"127.0.0.1:59392","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:742 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-06T09:33:14.458438Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.329448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:14.458454Z","caller":"traceutil/trace.go:172","msg":"trace[1260011061] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:750; }","duration":"144.345593ms","start":"2025-12-06T09:33:14.314103Z","end":"2025-12-06T09:33:14.458448Z","steps":["trace[1260011061] 'agreement among raft nodes before linearized reading'  (duration: 144.313618ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.929431Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"464.371235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-06T09:33:16.929771Z","caller":"traceutil/trace.go:172","msg":"trace[1952817100] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:766; }","duration":"464.688704ms","start":"2025-12-06T09:33:16.465041Z","end":"2025-12-06T09:33:16.929729Z","steps":["trace[1952817100] 'range keys from in-memory index tree'  (duration: 464.189821ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.930063Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:33:16.465022Z","time spent":"465.004104ms","remote":"127.0.0.1:59392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1139,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-12-06T09:33:16.930158Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"453.249911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:16.930184Z","caller":"traceutil/trace.go:172","msg":"trace[2139522797] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:766; }","duration":"453.277304ms","start":"2025-12-06T09:33:16.476900Z","end":"2025-12-06T09:33:16.930177Z","steps":["trace[2139522797] 'range keys from in-memory index tree'  (duration: 453.153922ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.930195Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"359.437486ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-06T09:33:16.930205Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:33:16.476884Z","time spent":"453.312614ms","remote":"127.0.0.1:59438","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:33:16.930214Z","caller":"traceutil/trace.go:172","msg":"trace[1289543642] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:766; }","duration":"359.45573ms","start":"2025-12-06T09:33:16.570753Z","end":"2025-12-06T09:33:16.930208Z","steps":["trace[1289543642] 'range keys from in-memory index tree'  (duration: 359.349019ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.930230Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:33:16.570739Z","time spent":"359.487723ms","remote":"127.0.0.1:59438","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-06T09:33:16.929660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.652836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-mount\" limit:1 ","response":"range_response_count:1 size:3258"}
	{"level":"info","ts":"2025-12-06T09:33:16.930291Z","caller":"traceutil/trace.go:172","msg":"trace[1950926214] range","detail":"{range_begin:/registry/pods/default/busybox-mount; range_end:; response_count:1; response_revision:766; }","duration":"168.316875ms","start":"2025-12-06T09:33:16.761970Z","end":"2025-12-06T09:33:16.930287Z","steps":["trace[1950926214] 'range keys from in-memory index tree'  (duration: 167.512563ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.930385Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"388.676638ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:16.930401Z","caller":"traceutil/trace.go:172","msg":"trace[1014901869] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:766; }","duration":"388.701187ms","start":"2025-12-06T09:33:16.541695Z","end":"2025-12-06T09:33:16.930396Z","steps":["trace[1014901869] 'range keys from in-memory index tree'  (duration: 388.606192ms)"],"step_count":1}
	
	
	==> etcd [3a8e87a014af6bf12d4117ed4089faca383e7c48b3791baad8efc33075be15ca] <==
	{"level":"warn","ts":"2025-12-06T09:31:55.584044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.593708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.604658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.614277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.624270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.631273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.706311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41846","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:32:30.573461Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T09:32:30.573599Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-715379","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.160:2380"],"advertise-client-urls":["https://192.168.39.160:2379"]}
	{"level":"error","ts":"2025-12-06T09:32:30.573884Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:32:30.577505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:32:30.577559Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:32:30.577652Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6b56431cc78e971c","current-leader-member-id":"6b56431cc78e971c"}
	{"level":"info","ts":"2025-12-06T09:32:30.577678Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T09:32:30.577696Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-06T09:32:30.577756Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:32:30.577805Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:32:30.577814Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:32:30.577845Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.160:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:32:30.577852Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.160:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:32:30.577857Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.160:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:32:30.581068Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.160:2380"}
	{"level":"error","ts":"2025-12-06T09:32:30.581132Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.160:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:32:30.581154Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.160:2380"}
	{"level":"info","ts":"2025-12-06T09:32:30.581159Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-715379","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.160:2380"],"advertise-client-urls":["https://192.168.39.160:2379"]}
	
	
	==> kernel <==
	 09:38:23 up 8 min,  0 users,  load average: 0.21, 0.34, 0.20
	Linux functional-715379 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b8c7b6eb44b2ec4d484fa6d548093a49ec990a958fbdb96afe52ad27e82779a8] <==
	I1206 09:32:40.300178       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1206 09:32:40.300206       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:32:40.300266       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:32:40.300276       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:32:40.478723       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:32:41.103610       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1206 09:32:41.437821       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160]
	I1206 09:32:41.439644       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:32:41.449048       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:32:41.712264       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:32:41.751592       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:32:41.778221       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:32:41.788097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:32:43.920601       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:32:56.871873       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.74.163"}
	I1206 09:33:00.868163       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.185.140"}
	I1206 09:33:03.271099       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.125.251"}
	I1206 09:33:11.561360       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.38.252"}
	E1206 09:33:22.472743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.160:8441->192.168.39.1:48432: use of closed network connection
	I1206 09:33:23.583229       1 controller.go:667] quota admission added evaluator for: namespaces
	E1206 09:33:23.790140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.160:8441->192.168.39.1:38270: use of closed network connection
	I1206 09:33:23.989714       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.177.204"}
	I1206 09:33:24.024024       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.44.108"}
	E1206 09:33:25.204487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.160:8441->192.168.39.1:38288: use of closed network connection
	E1206 09:33:26.735859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.160:8441->192.168.39.1:38306: use of closed network connection
	
	
	==> kube-controller-manager [5534747c5e4a1116ef2765bb1ce89014744ff2be2bc22c754cca0bde525a4adb] <==
	I1206 09:32:43.569995       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-715379"
	I1206 09:32:43.570044       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 09:32:43.570319       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:32:43.573589       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:32:43.575008       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:32:43.575591       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:32:43.575904       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:32:43.579330       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:32:43.584868       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:32:43.587269       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:32:43.595515       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:32:43.598059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:32:43.600227       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:32:43.602515       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 09:32:43.606964       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:32:43.616959       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:32:43.617027       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:32:43.618310       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:32:43.619420       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	E1206 09:33:23.780847       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.802589       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.808212       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.822851       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.825561       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.842411       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e9812d4774ff159457f5f89b2bffe3e352ff06791467242b722d0b7107e96b3e] <==
	I1206 09:31:59.756838       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:31:59.756888       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:31:59.756987       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:31:59.757041       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:31:59.757467       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:31:59.757520       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:31:59.757549       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:31:59.765102       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:31:59.765140       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 09:31:59.765166       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 09:31:59.765183       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 09:31:59.765189       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 09:31:59.765352       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 09:31:59.769679       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:31:59.772673       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:31:59.776298       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:31:59.784572       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:31:59.787087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:31:59.789334       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:31:59.802899       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 09:31:59.805376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:31:59.805387       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:31:59.805391       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1206 09:32:29.765898       1 resource_quota_controller.go:446] "Unhandled Error" err="failed to discover resources: Get \"https://192.168.39.160:8441/api\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError"
	I1206 09:32:29.788395       1 garbagecollector.go:789] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.160:8441/api\": dial tcp 192.168.39.160:8441: connect: connection refused"
	
	
	==> kube-proxy [4ae120c0ef39885b56bf1ce1b1ec3f4ea1aca535f34cd34e85303e1fa27c2983] <==
	I1206 09:32:41.052636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:32:41.154893       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:32:41.157574       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.160"]
	E1206 09:32:41.159558       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:32:41.291129       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:32:41.291833       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:32:41.292044       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:32:41.320057       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:32:41.320340       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:32:41.320372       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:32:41.326661       1 config.go:200] "Starting service config controller"
	I1206 09:32:41.326700       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:32:41.326714       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:32:41.326718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:32:41.326725       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:32:41.326728       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:32:41.327494       1 config.go:309] "Starting node config controller"
	I1206 09:32:41.327523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:32:41.327529       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:32:41.427651       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:32:41.427696       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:32:41.427715       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b70b38e2d71c3008cf21dcfb0222c38dd3315343d3aac4e8993f3e0904c73339] <==
	I1206 09:31:35.421258       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:31:35.522159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:31:35.522184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.160"]
	E1206 09:31:35.522268       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:31:35.558066       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:31:35.558127       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:31:35.558223       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:31:35.568307       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:31:35.568751       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:31:35.568779       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:31:35.573689       1 config.go:309] "Starting node config controller"
	I1206 09:31:35.573721       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:31:35.573727       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:31:35.574285       1 config.go:200] "Starting service config controller"
	I1206 09:31:35.574318       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:31:35.574333       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:31:35.574337       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:31:35.574396       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:31:35.574419       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:31:35.674576       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:31:35.674822       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:31:35.675024       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [99f2a113f5da2f934fa8c61ae576a810a559c2a1c909fc7f117a910531abe997] <==
	E1206 09:31:56.369609       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:31:56.369621       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:31:56.369631       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:31:56.369642       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:31:56.373208       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:31:56.373261       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:31:56.373451       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:31:56.373487       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:31:56.373653       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:31:56.374989       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:31:56.375145       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:31:56.375237       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:31:56.375249       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:31:56.375478       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:31:56.375636       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:31:56.375719       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:31:56.375732       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:31:56.395675       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1206 09:32:30.640720       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 09:32:30.640782       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 09:32:30.640802       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 09:32:30.641024       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1206 09:32:30.641126       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:32:30.641314       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 09:32:30.641498       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bb612a274e0afcc16d80f9034791117adaf12a71e06d30a42ecfca73ad94e768] <==
	E1206 09:32:34.395122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.160:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:32:34.422999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.160:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:32:34.498576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.160:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:32:34.506249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.160:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:32:35.868585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.160:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:32:35.934432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.39.160:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:32:35.971237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.160:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:32:36.100705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:32:36.149720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.160:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:32:36.161479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:32:36.259205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.160:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:32:36.319508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.160:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:32:36.336712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:32:36.522195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.160:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:32:36.625159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:32:36.715977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.160:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:32:36.797760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.160:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:32:36.819316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.39.160:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:32:37.095877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.160:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:32:37.229693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.39.160:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:32:37.276447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.160:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:32:37.346742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.160:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:32:37.539715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:32:40.172046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:32:47.523706       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:37:04 functional-715379 kubelet[5303]: E1206 09:37:04.479757    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:37:10 functional-715379 kubelet[5303]: E1206 09:37:10.480179    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:37:11 functional-715379 kubelet[5303]: E1206 09:37:11.479687    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9trdj" podUID="d6f4e72e-34af-40e7-a143-4075702d48de"
	Dec 06 09:37:13 functional-715379 kubelet[5303]: E1206 09:37:13.482807    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	Dec 06 09:37:17 functional-715379 kubelet[5303]: E1206 09:37:17.479257    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:37:22 functional-715379 kubelet[5303]: E1206 09:37:22.480470    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:37:24 functional-715379 kubelet[5303]: E1206 09:37:24.479895    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9trdj" podUID="d6f4e72e-34af-40e7-a143-4075702d48de"
	Dec 06 09:37:28 functional-715379 kubelet[5303]: E1206 09:37:28.479550    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:37:28 functional-715379 kubelet[5303]: E1206 09:37:28.481126    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	Dec 06 09:37:36 functional-715379 kubelet[5303]: E1206 09:37:36.479523    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9trdj" podUID="d6f4e72e-34af-40e7-a143-4075702d48de"
	Dec 06 09:37:37 functional-715379 kubelet[5303]: E1206 09:37:37.481129    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:37:40 functional-715379 kubelet[5303]: E1206 09:37:40.479326    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:37:43 functional-715379 kubelet[5303]: E1206 09:37:43.481879    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	Dec 06 09:37:49 functional-715379 kubelet[5303]: E1206 09:37:49.481171    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:37:51 functional-715379 kubelet[5303]: E1206 09:37:51.480504    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9trdj" podUID="d6f4e72e-34af-40e7-a143-4075702d48de"
	Dec 06 09:37:53 functional-715379 kubelet[5303]: E1206 09:37:53.479233    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:37:55 functional-715379 kubelet[5303]: E1206 09:37:55.480426    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	Dec 06 09:38:03 functional-715379 kubelet[5303]: E1206 09:38:03.480701    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:38:05 functional-715379 kubelet[5303]: E1206 09:38:05.480012    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9trdj" podUID="d6f4e72e-34af-40e7-a143-4075702d48de"
	Dec 06 09:38:08 functional-715379 kubelet[5303]: E1206 09:38:08.479023    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:38:10 functional-715379 kubelet[5303]: E1206 09:38:10.479574    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	Dec 06 09:38:17 functional-715379 kubelet[5303]: E1206 09:38:17.480535    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:38:19 functional-715379 kubelet[5303]: E1206 09:38:19.479112    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9trdj" podUID="d6f4e72e-34af-40e7-a143-4075702d48de"
	Dec 06 09:38:20 functional-715379 kubelet[5303]: E1206 09:38:20.479549    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:38:22 functional-715379 kubelet[5303]: E1206 09:38:22.480521    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	
	
	==> storage-provisioner [839564bcdbe311c10548583f3e612b7d2577aec0f2fda27317620b2589a32398] <==
	I1206 09:32:40.961846       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:32:40.963723       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f91e424d917601f86d02811bac41868cdd9e38e8142396247d355b4b12508629] <==
	W1206 09:37:58.447591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:00.451424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:00.456733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:02.460323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:02.468504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:04.471622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:04.477564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:06.481051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:06.486416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:08.489572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:08.494618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:10.497600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:10.507252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:12.510875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:12.515992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:14.519401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:14.524351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:16.527605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:16.533514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:18.537874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:18.543091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:20.545808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:20.550141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:22.553235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:22.558257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-715379 -n functional-715379
helpers_test.go:269: (dbg) Run:  kubectl --context functional-715379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-7d85dfc575-9trdj sp-pod dashboard-metrics-scraper-77bf4d6c4c-2jpwj kubernetes-dashboard-855c9754f9-pfrh4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-715379 describe pod busybox-mount hello-node-connect-7d85dfc575-9trdj sp-pod dashboard-metrics-scraper-77bf4d6c4c-2jpwj kubernetes-dashboard-855c9754f9-pfrh4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-715379 describe pod busybox-mount hello-node-connect-7d85dfc575-9trdj sp-pod dashboard-metrics-scraper-77bf4d6c4c-2jpwj kubernetes-dashboard-855c9754f9-pfrh4: exit status 1 (84.958119ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-715379/192.168.39.160
	Start Time:       Sat, 06 Dec 2025 09:33:11 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://3934e08645b64bcf347af631d06c9ab87c0db3802ea6ce5e335144dc3ffa96dd
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:33:16 +0000
	      Finished:     Sat, 06 Dec 2025 09:33:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wplkx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wplkx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m13s  default-scheduler  Successfully assigned default/busybox-mount to functional-715379
	  Normal  Pulling    5m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m8s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 755ms (4.39s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m8s   kubelet            Created container: mount-munger
	  Normal  Started    5m8s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-connect-7d85dfc575-9trdj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-715379/192.168.39.160
	Start Time:       Sat, 06 Dec 2025 09:33:11 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dtf5j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dtf5j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m13s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9trdj to functional-715379
	  Normal   Pulling    2m20s (x5 over 5m12s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m19s (x5 over 5m7s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m19s (x5 over 5m7s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    5s (x19 over 5m7s)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5s (x19 over 5m7s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-715379/192.168.39.160
	Start Time:       Sat, 06 Dec 2025 09:33:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4mv9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p4mv9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m10s                 default-scheduler  Successfully assigned default/sp-pod to functional-715379
	  Warning  Failed     3m43s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m11s (x5 over 5m9s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m10s (x4 over 5m6s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m10s (x5 over 5m6s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x20 over 5m6s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4s (x20 over 5m6s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-2jpwj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-pfrh4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-715379 describe pod busybox-mount hello-node-connect-7d85dfc575-9trdj sp-pod dashboard-metrics-scraper-77bf4d6c4c-2jpwj kubernetes-dashboard-855c9754f9-pfrh4: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (301.99s)
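Every pull failure in this test is the same Docker Hub response: 429 Too Many Requests on unauthenticated pulls of nginx, kicbase/echo-server and the dashboard images. When reproducing locally, one way to keep the kubelet from hitting registry-1.docker.io anonymously is to side-load the images into the profile's container runtime, or to attach registry credentials; a minimal sketch using stock docker/minikube/kubectl commands follows (the secret name regcred and the DOCKERHUB_* variables are placeholders, not part of the test):

    # Pull once on the host, then copy the images into the functional-715379
    # runtime so in-cluster pods never pull from Docker Hub anonymously.
    docker pull docker.io/nginx:latest
    docker pull docker.io/kicbase/echo-server:latest
    minikube -p functional-715379 image load docker.io/nginx:latest
    minikube -p functional-715379 image load docker.io/kicbase/echo-server:latest

    # Alternatively, create a docker-registry secret and reference it from the
    # pod spec via imagePullSecrets so pulls count against an authenticated quota.
    kubectl --context functional-715379 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKERHUB_USER" \
      --docker-password="$DOCKERHUB_TOKEN"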

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (375.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [0760bca6-e290-4784-90e4-ad93c0f0b55e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004508438s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-715379 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-715379 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-715379 get pvc myclaim -o=json
I1206 09:33:06.979804  387687 retry.go:31] will retry after 2.619601658s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:ad968170-d29d-4c38-8eaa-cea6dbf64964 ResourceVersion:712 Generation:0 CreationTimestamp:2025-12-06 09:33:06 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001914f60 VolumeMode:0xc001914f70 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-715379 get pvc myclaim -o=json
I1206 09:33:09.667948  387687 retry.go:31] will retry after 4.343691027s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:ad968170-d29d-4c38-8eaa-cea6dbf64964 ResourceVersion:712 Generation:0 CreationTimestamp:2025-12-06 09:33:06 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001915ad0 VolumeMode:0xc001915ae0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
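For readability, the claim shown in the last-applied-configuration annotation above corresponds to a manifest along these lines; this is a sketch reconstructed from that annotation (default StorageClass, k8s.io/minikube-hostpath provisioner per the volume.*/storage-provisioner annotations) and is not necessarily byte-for-byte identical to testdata/storage-provisioner/pvc.yaml:

    kubectl --context functional-715379 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 500Mi
    EOF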
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-715379 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-715379 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:33:14.481132  387687 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [18a069e9-182a-451d-9e17-f139f86fd0bd] Pending
helpers_test.go:352: "sp-pod" [18a069e9-182a-451d-9e17-f139f86fd0bd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-715379 -n functional-715379
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-06 09:39:14.705890056 +0000 UTC m=+1727.993267188
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-715379 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-715379 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-715379/192.168.39.160
Start Time:       Sat, 06 Dec 2025 09:33:14 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:  10.244.0.9
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4mv9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-p4mv9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/sp-pod to functional-715379
Warning  Failed     4m33s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m1s (x5 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m (x4 over 5m56s)    kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m (x5 over 5m56s)    kubelet            Error: ErrImagePull
Warning  Failed     54s (x20 over 5m56s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    43s (x21 over 5m56s)  kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-715379 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-715379 logs sp-pod -n default: exit status 1 (71.960513ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-715379 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
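Per the describe output above, the pod the test waits for runs docker.io/nginx as container myfrontend and mounts claim myclaim at /tmp/mount; an equivalent spec looks roughly like the sketch below (reconstructed from the reported mounts and events, not copied from testdata/storage-provisioner/pod.yaml). With the image preloaded or an imagePullSecret such as the regcred sketched earlier, the same spec should come up Ready rather than sitting in ImagePullBackOff.

    kubectl --context functional-715379 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      namespace: default
      labels:
        test: storage-provisioner
    spec:
      containers:
        - name: myfrontend
          image: docker.io/nginx
          volumeMounts:
            - mountPath: /tmp/mount
              name: mypd
      volumes:
        - name: mypd
          persistentVolumeClaim:
            claimName: myclaim
    EOF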
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-715379 -n functional-715379
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-715379 logs -n 25: (1.215017844s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                              ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-715379 ssh findmnt -T /mount-9p | grep 9p                                                                            │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ mount          │ -p functional-715379 /tmp/TestFunctionalparallelMountCmdspecific-port61418453/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ ssh            │ functional-715379 ssh findmnt -T /mount-9p | grep 9p                                                                            │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh -- ls -la /mount-9p                                                                                       │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh sudo umount -f /mount-9p                                                                                  │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ mount          │ -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount2 --alsologtostderr -v=1              │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ mount          │ -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount1 --alsologtostderr -v=1              │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ mount          │ -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount3 --alsologtostderr -v=1              │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ ssh            │ functional-715379 ssh findmnt -T /mount1                                                                                        │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ ssh            │ functional-715379 ssh findmnt -T /mount1                                                                                        │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh findmnt -T /mount2                                                                                        │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh findmnt -T /mount3                                                                                        │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ mount          │ -p functional-715379 --kill=true                                                                                                │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-715379 --alsologtostderr -v=1                                                                  │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ update-context │ functional-715379 update-context --alsologtostderr -v=2                                                                         │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-715379 update-context --alsologtostderr -v=2                                                                         │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-715379 update-context --alsologtostderr -v=2                                                                         │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls --format short --alsologtostderr                                                                     │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls --format json --alsologtostderr                                                                      │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls --format table --alsologtostderr                                                                     │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls --format yaml --alsologtostderr                                                                      │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-715379 ssh pgrep buildkitd                                                                                           │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ image          │ functional-715379 image build -t localhost/my-image:functional-715379 testdata/build --alsologtostderr                          │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-715379 image ls                                                                                                      │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ service        │ functional-715379 service hello-node-connect --url                                                                              │ functional-715379 │ jenkins │ v1.37.0 │ 06 Dec 25 09:38 UTC │ 06 Dec 25 09:38 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:33:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:33:10.718401  399186 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:33:10.718506  399186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:10.718515  399186 out.go:374] Setting ErrFile to fd 2...
	I1206 09:33:10.718522  399186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:10.718974  399186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:33:10.719531  399186 out.go:368] Setting JSON to false
	I1206 09:33:10.720768  399186 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8141,"bootTime":1765005450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:33:10.720836  399186 start.go:143] virtualization: kvm guest
	I1206 09:33:10.722601  399186 out.go:179] * [functional-715379] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:33:10.724144  399186 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:33:10.724163  399186 notify.go:221] Checking for updates...
	I1206 09:33:10.726252  399186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:33:10.727407  399186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:33:10.728380  399186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:33:10.729438  399186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:33:10.730564  399186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:33:10.732295  399186 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:33:10.732971  399186 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:33:10.766503  399186 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:33:10.768236  399186 start.go:309] selected driver: kvm2
	I1206 09:33:10.768258  399186 start.go:927] validating driver "kvm2" against &{Name:functional-715379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-715379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:33:10.768384  399186 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:33:10.770464  399186 out.go:203] 
	W1206 09:33:10.771650  399186 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:33:10.772738  399186 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	89958adcbb058       9056ab77afb8e       26 seconds ago      Running             echo-server               0                   4b0f64ba6803f       hello-node-connect-7d85dfc575-9trdj         default
	3934e08645b64       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   83334eaba984d       busybox-mount                               default
	39da92c476218       5107333e08a87       6 minutes ago       Running             mysql                     0                   536c9cd032d24       mysql-5bb876957f-lv58m                      default
	aaf4a154d4a0e       9056ab77afb8e       6 minutes ago       Running             echo-server               0                   9f8de906c491b       hello-node-75c85bcc94-pwg8d                 default
	f91e424d91760       6e38f40d628db       6 minutes ago       Running             storage-provisioner       4                   40743e1a40542       storage-provisioner                         kube-system
	839564bcdbe31       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       3                   40743e1a40542       storage-provisioner                         kube-system
	e95fff93d800e       52546a367cc9e       6 minutes ago       Running             coredns                   2                   3473253f1c11f       coredns-66bc5c9577-w4crl                    kube-system
	4ae120c0ef398       8aa150647e88a       6 minutes ago       Running             kube-proxy                2                   1abd2d5923be8       kube-proxy-nscwz                            kube-system
	b8c7b6eb44b2e       a5f569d49a979       6 minutes ago       Running             kube-apiserver            0                   261c4aae752cd       kube-apiserver-functional-715379            kube-system
	5534747c5e4a1       01e8bacf0f500       6 minutes ago       Running             kube-controller-manager   2                   defe7964445a7       kube-controller-manager-functional-715379   kube-system
	bb612a274e0af       88320b5498ff2       6 minutes ago       Running             kube-scheduler            2                   6920096e7b156       kube-scheduler-functional-715379            kube-system
	1bb47db05c08b       a3e246e9556e9       6 minutes ago       Running             etcd                      2                   3191fa856ea72       etcd-functional-715379                      kube-system
	e9812d4774ff1       01e8bacf0f500       7 minutes ago       Exited              kube-controller-manager   1                   defe7964445a7       kube-controller-manager-functional-715379   kube-system
	3a8e87a014af6       a3e246e9556e9       7 minutes ago       Exited              etcd                      1                   3191fa856ea72       etcd-functional-715379                      kube-system
	99f2a113f5da2       88320b5498ff2       7 minutes ago       Exited              kube-scheduler            1                   6920096e7b156       kube-scheduler-functional-715379            kube-system
	ffe447f98c0e8       52546a367cc9e       7 minutes ago       Exited              coredns                   1                   3473253f1c11f       coredns-66bc5c9577-w4crl                    kube-system
	b70b38e2d71c3       8aa150647e88a       7 minutes ago       Exited              kube-proxy                1                   1abd2d5923be8       kube-proxy-nscwz                            kube-system
	
	
	==> containerd <==
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.168355499Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.169607014Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=12115"
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.171417236Z" level=info msg="ImageUpdate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.174415535Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.175252521Z" level=info msg="Pulled image \"kicbase/echo-server:latest\" with image id \"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\", repo tag \"docker.io/kicbase/echo-server:latest\", repo digest \"docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6\", size \"2138418\" in 693.847446ms"
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.175282410Z" level=info msg="PullImage \"kicbase/echo-server:latest\" returns image reference \"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.182823298Z" level=info msg="CreateContainer within sandbox \"4b0f64ba6803f6e7b27c3c07dfcbbed8cd68599648e5e1eaf0e33d9274ad77e9\" for container &ContainerMetadata{Name:echo-server,Attempt:0,}"
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.201401035Z" level=info msg="CreateContainer within sandbox \"4b0f64ba6803f6e7b27c3c07dfcbbed8cd68599648e5e1eaf0e33d9274ad77e9\" for &ContainerMetadata{Name:echo-server,Attempt:0,} returns container id \"89958adcbb0580cd3c9703568490f964e6fc3209157e3e931d86bbb5d0fdfcee\""
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.203853107Z" level=info msg="StartContainer for \"89958adcbb0580cd3c9703568490f964e6fc3209157e3e931d86bbb5d0fdfcee\""
	Dec 06 09:38:49 functional-715379 containerd[4525]: time="2025-12-06T09:38:49.264706647Z" level=info msg="StartContainer for \"89958adcbb0580cd3c9703568490f964e6fc3209157e3e931d86bbb5d0fdfcee\" returns successfully"
	Dec 06 09:38:59 functional-715379 containerd[4525]: time="2025-12-06T09:38:59.482976644Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 06 09:38:59 functional-715379 containerd[4525]: time="2025-12-06T09:38:59.485272763Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:38:59 functional-715379 containerd[4525]: time="2025-12-06T09:38:59.745663024Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:39:00 functional-715379 containerd[4525]: time="2025-12-06T09:39:00.405191731Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:39:00 functional-715379 containerd[4525]: time="2025-12-06T09:39:00.405286831Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Dec 06 09:39:07 functional-715379 containerd[4525]: time="2025-12-06T09:39:07.481423437Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Dec 06 09:39:07 functional-715379 containerd[4525]: time="2025-12-06T09:39:07.485020112Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:39:07 functional-715379 containerd[4525]: time="2025-12-06T09:39:07.733450807Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:39:08 functional-715379 containerd[4525]: time="2025-12-06T09:39:08.398719218Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:39:08 functional-715379 containerd[4525]: time="2025-12-06T09:39:08.398814898Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Dec 06 09:39:09 functional-715379 containerd[4525]: time="2025-12-06T09:39:09.481403431Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Dec 06 09:39:09 functional-715379 containerd[4525]: time="2025-12-06T09:39:09.484167945Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:39:09 functional-715379 containerd[4525]: time="2025-12-06T09:39:09.757421693Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:39:10 functional-715379 containerd[4525]: time="2025-12-06T09:39:10.404664635Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:39:10 functional-715379 containerd[4525]: time="2025-12-06T09:39:10.404777774Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
	
	
	==> coredns [e95fff93d800ebb4020c57425eff27d077155035afe580120deb1cf0ffa236b9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52569 - 49207 "HINFO IN 2592115185418839874.7134750291161745495. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.08797264s
	
	
	==> coredns [ffe447f98c0e8bf1f37fe60dfda5c2b53943736e141bf7a457b99efad7d86db8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46473 - 11742 "HINFO IN 6274865240676304577.1924407954027326078. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.160460036s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=461": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-715379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-715379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-715379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_30_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:30:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-715379
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:39:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:39:09 +0000   Sat, 06 Dec 2025 09:30:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:39:09 +0000   Sat, 06 Dec 2025 09:30:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:39:09 +0000   Sat, 06 Dec 2025 09:30:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:39:09 +0000   Sat, 06 Dec 2025 09:30:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    functional-715379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 522886bcddf24fd391c46bbe382f6c4d
	  System UUID:                522886bc-ddf2-4fd3-91c4-6bbe382f6c4d
	  Boot ID:                    16e62390-a490-422d-a0eb-821ce79700c3
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-pwg8d                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  default                     hello-node-connect-7d85dfc575-9trdj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  default                     mysql-5bb876957f-lv58m                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m12s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-w4crl                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m25s
	  kube-system                 etcd-functional-715379                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m32s
	  kube-system                 kube-apiserver-functional-715379              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-functional-715379     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-nscwz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-scheduler-functional-715379              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-2jpwj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pfrh4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m34s                  kube-proxy       
	  Normal  Starting                 7m40s                  kube-proxy       
	  Normal  Starting                 8m23s                  kube-proxy       
	  Normal  Starting                 8m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m30s                  kubelet          Node functional-715379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s                  kubelet          Node functional-715379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s                  kubelet          Node functional-715379 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m30s                  kubelet          Node functional-715379 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     8m26s                  cidrAllocator    Node functional-715379 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           8m26s                  node-controller  Node functional-715379 event: Registered Node functional-715379 in Controller
	  Normal  Starting                 7m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m21s (x8 over 7m21s)  kubelet          Node functional-715379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m21s (x8 over 7m21s)  kubelet          Node functional-715379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m21s (x7 over 7m21s)  kubelet          Node functional-715379 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m16s                  node-controller  Node functional-715379 event: Registered Node functional-715379 in Controller
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m38s (x8 over 6m38s)  kubelet          Node functional-715379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x8 over 6m38s)  kubelet          Node functional-715379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s (x7 over 6m38s)  kubelet          Node functional-715379 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m32s                  node-controller  Node functional-715379 event: Registered Node functional-715379 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.079835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.100446] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.090420] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.116265] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.182896] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 09:31] kauditd_printk_skb: 265 callbacks suppressed
	[ +21.604799] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.872374] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.064294] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.790011] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.933410] kauditd_printk_skb: 28 callbacks suppressed
	[Dec 6 09:32] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.118680] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.917854] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.248432] kauditd_printk_skb: 63 callbacks suppressed
	[  +3.519669] kauditd_printk_skb: 59 callbacks suppressed
	[  +4.120396] kauditd_printk_skb: 55 callbacks suppressed
	[Dec 6 09:33] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.000080] kauditd_printk_skb: 120 callbacks suppressed
	[  +2.950233] kauditd_printk_skb: 147 callbacks suppressed
	[  +5.127785] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.226026] crun[8460]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.722275] kauditd_printk_skb: 86 callbacks suppressed
	[Dec 6 09:38] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [1bb47db05c08beebc85ace42b7d7d0a8e026be57973b887bd7cdce59546987b8] <==
	{"level":"info","ts":"2025-12-06T09:33:11.085434Z","caller":"traceutil/trace.go:172","msg":"trace[471396808] transaction","detail":"{read_only:false; response_revision:716; number_of_response:1; }","duration":"179.446366ms","start":"2025-12-06T09:33:10.905979Z","end":"2025-12-06T09:33:11.085425Z","steps":["trace[471396808] 'process raft request'  (duration: 179.370948ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:12.604056Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"289.547841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:12.604280Z","caller":"traceutil/trace.go:172","msg":"trace[1297274851] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:749; }","duration":"290.605959ms","start":"2025-12-06T09:33:12.313663Z","end":"2025-12-06T09:33:12.604269Z","steps":["trace[1297274851] 'range keys from in-memory index tree'  (duration: 289.501528ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:33:14.457341Z","caller":"traceutil/trace.go:172","msg":"trace[1674933141] linearizableReadLoop","detail":"{readStateIndex:831; appliedIndex:831; }","duration":"283.139177ms","start":"2025-12-06T09:33:14.174187Z","end":"2025-12-06T09:33:14.457326Z","steps":["trace[1674933141] 'read index received'  (duration: 283.134148ms)","trace[1674933141] 'applied index is now lower than readState.Index'  (duration: 4.48µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:33:14.457443Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.240945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:14.457459Z","caller":"traceutil/trace.go:172","msg":"trace[1994588624] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:749; }","duration":"283.270917ms","start":"2025-12-06T09:33:14.174183Z","end":"2025-12-06T09:33:14.457454Z","steps":["trace[1994588624] 'agreement among raft nodes before linearized reading'  (duration: 283.216708ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:14.457731Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.679006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/sp-pod\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:14.457755Z","caller":"traceutil/trace.go:172","msg":"trace[1761726123] range","detail":"{range_begin:/registry/pods/default/sp-pod; range_end:; response_count:0; response_revision:750; }","duration":"154.705605ms","start":"2025-12-06T09:33:14.303042Z","end":"2025-12-06T09:33:14.457748Z","steps":["trace[1761726123] 'agreement among raft nodes before linearized reading'  (duration: 154.665398ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:33:14.457899Z","caller":"traceutil/trace.go:172","msg":"trace[826916383] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"357.317258ms","start":"2025-12-06T09:33:14.100576Z","end":"2025-12-06T09:33:14.457893Z","steps":["trace[826916383] 'process raft request'  (duration: 357.046679ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:14.458315Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:33:14.100555Z","time spent":"357.424353ms","remote":"127.0.0.1:59392","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:742 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-06T09:33:14.458438Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.329448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:14.458454Z","caller":"traceutil/trace.go:172","msg":"trace[1260011061] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:750; }","duration":"144.345593ms","start":"2025-12-06T09:33:14.314103Z","end":"2025-12-06T09:33:14.458448Z","steps":["trace[1260011061] 'agreement among raft nodes before linearized reading'  (duration: 144.313618ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.929431Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"464.371235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-06T09:33:16.929771Z","caller":"traceutil/trace.go:172","msg":"trace[1952817100] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:766; }","duration":"464.688704ms","start":"2025-12-06T09:33:16.465041Z","end":"2025-12-06T09:33:16.929729Z","steps":["trace[1952817100] 'range keys from in-memory index tree'  (duration: 464.189821ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.930063Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:33:16.465022Z","time spent":"465.004104ms","remote":"127.0.0.1:59392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1139,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-12-06T09:33:16.930158Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"453.249911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:16.930184Z","caller":"traceutil/trace.go:172","msg":"trace[2139522797] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:766; }","duration":"453.277304ms","start":"2025-12-06T09:33:16.476900Z","end":"2025-12-06T09:33:16.930177Z","steps":["trace[2139522797] 'range keys from in-memory index tree'  (duration: 453.153922ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.930195Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"359.437486ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-06T09:33:16.930205Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:33:16.476884Z","time spent":"453.312614ms","remote":"127.0.0.1:59438","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-06T09:33:16.930214Z","caller":"traceutil/trace.go:172","msg":"trace[1289543642] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:766; }","duration":"359.45573ms","start":"2025-12-06T09:33:16.570753Z","end":"2025-12-06T09:33:16.930208Z","steps":["trace[1289543642] 'range keys from in-memory index tree'  (duration: 359.349019ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.930230Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:33:16.570739Z","time spent":"359.487723ms","remote":"127.0.0.1:59438","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-06T09:33:16.929660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.652836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-mount\" limit:1 ","response":"range_response_count:1 size:3258"}
	{"level":"info","ts":"2025-12-06T09:33:16.930291Z","caller":"traceutil/trace.go:172","msg":"trace[1950926214] range","detail":"{range_begin:/registry/pods/default/busybox-mount; range_end:; response_count:1; response_revision:766; }","duration":"168.316875ms","start":"2025-12-06T09:33:16.761970Z","end":"2025-12-06T09:33:16.930287Z","steps":["trace[1950926214] 'range keys from in-memory index tree'  (duration: 167.512563ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:33:16.930385Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"388.676638ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:33:16.930401Z","caller":"traceutil/trace.go:172","msg":"trace[1014901869] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:766; }","duration":"388.701187ms","start":"2025-12-06T09:33:16.541695Z","end":"2025-12-06T09:33:16.930396Z","steps":["trace[1014901869] 'range keys from in-memory index tree'  (duration: 388.606192ms)"],"step_count":1}
	
	
	==> etcd [3a8e87a014af6bf12d4117ed4089faca383e7c48b3791baad8efc33075be15ca] <==
	{"level":"warn","ts":"2025-12-06T09:31:55.584044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.593708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.604658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.614277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.624270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.631273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:31:55.706311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41846","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:32:30.573461Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T09:32:30.573599Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-715379","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.160:2380"],"advertise-client-urls":["https://192.168.39.160:2379"]}
	{"level":"error","ts":"2025-12-06T09:32:30.573884Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:32:30.577505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:32:30.577559Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:32:30.577652Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6b56431cc78e971c","current-leader-member-id":"6b56431cc78e971c"}
	{"level":"info","ts":"2025-12-06T09:32:30.577678Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T09:32:30.577696Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-06T09:32:30.577756Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:32:30.577805Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:32:30.577814Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:32:30.577845Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.160:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:32:30.577852Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.160:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:32:30.577857Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.160:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:32:30.581068Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.160:2380"}
	{"level":"error","ts":"2025-12-06T09:32:30.581132Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.160:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:32:30.581154Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.160:2380"}
	{"level":"info","ts":"2025-12-06T09:32:30.581159Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-715379","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.160:2380"],"advertise-client-urls":["https://192.168.39.160:2379"]}
	
	
	==> kernel <==
	 09:39:15 up 9 min,  0 users,  load average: 0.65, 0.40, 0.22
	Linux functional-715379 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b8c7b6eb44b2ec4d484fa6d548093a49ec990a958fbdb96afe52ad27e82779a8] <==
	I1206 09:32:40.300178       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1206 09:32:40.300206       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:32:40.300266       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:32:40.300276       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:32:40.478723       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:32:41.103610       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1206 09:32:41.437821       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160]
	I1206 09:32:41.439644       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:32:41.449048       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:32:41.712264       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:32:41.751592       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:32:41.778221       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:32:41.788097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:32:43.920601       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:32:56.871873       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.74.163"}
	I1206 09:33:00.868163       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.185.140"}
	I1206 09:33:03.271099       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.125.251"}
	I1206 09:33:11.561360       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.38.252"}
	E1206 09:33:22.472743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.160:8441->192.168.39.1:48432: use of closed network connection
	I1206 09:33:23.583229       1 controller.go:667] quota admission added evaluator for: namespaces
	E1206 09:33:23.790140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.160:8441->192.168.39.1:38270: use of closed network connection
	I1206 09:33:23.989714       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.177.204"}
	I1206 09:33:24.024024       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.44.108"}
	E1206 09:33:25.204487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.160:8441->192.168.39.1:38288: use of closed network connection
	E1206 09:33:26.735859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.160:8441->192.168.39.1:38306: use of closed network connection
	
	
	==> kube-controller-manager [5534747c5e4a1116ef2765bb1ce89014744ff2be2bc22c754cca0bde525a4adb] <==
	I1206 09:32:43.569995       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-715379"
	I1206 09:32:43.570044       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 09:32:43.570319       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:32:43.573589       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:32:43.575008       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:32:43.575591       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:32:43.575904       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:32:43.579330       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:32:43.584868       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:32:43.587269       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:32:43.595515       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:32:43.598059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:32:43.600227       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:32:43.602515       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 09:32:43.606964       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:32:43.616959       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:32:43.617027       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:32:43.618310       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:32:43.619420       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	E1206 09:33:23.780847       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.802589       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.808212       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.822851       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.825561       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:33:23.842411       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e9812d4774ff159457f5f89b2bffe3e352ff06791467242b722d0b7107e96b3e] <==
	I1206 09:31:59.756838       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:31:59.756888       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 09:31:59.756987       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:31:59.757041       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:31:59.757467       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:31:59.757520       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:31:59.757549       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:31:59.765102       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:31:59.765140       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 09:31:59.765166       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 09:31:59.765183       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 09:31:59.765189       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 09:31:59.765352       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 09:31:59.769679       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:31:59.772673       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:31:59.776298       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:31:59.784572       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:31:59.787087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:31:59.789334       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:31:59.802899       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 09:31:59.805376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:31:59.805387       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:31:59.805391       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1206 09:32:29.765898       1 resource_quota_controller.go:446] "Unhandled Error" err="failed to discover resources: Get \"https://192.168.39.160:8441/api\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError"
	I1206 09:32:29.788395       1 garbagecollector.go:789] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.160:8441/api\": dial tcp 192.168.39.160:8441: connect: connection refused"
	
	
	==> kube-proxy [4ae120c0ef39885b56bf1ce1b1ec3f4ea1aca535f34cd34e85303e1fa27c2983] <==
	I1206 09:32:41.052636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:32:41.154893       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:32:41.157574       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.160"]
	E1206 09:32:41.159558       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:32:41.291129       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:32:41.291833       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:32:41.292044       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:32:41.320057       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:32:41.320340       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:32:41.320372       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:32:41.326661       1 config.go:200] "Starting service config controller"
	I1206 09:32:41.326700       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:32:41.326714       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:32:41.326718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:32:41.326725       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:32:41.326728       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:32:41.327494       1 config.go:309] "Starting node config controller"
	I1206 09:32:41.327523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:32:41.327529       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:32:41.427651       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:32:41.427696       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:32:41.427715       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b70b38e2d71c3008cf21dcfb0222c38dd3315343d3aac4e8993f3e0904c73339] <==
	I1206 09:31:35.421258       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:31:35.522159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:31:35.522184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.160"]
	E1206 09:31:35.522268       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:31:35.558066       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:31:35.558127       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:31:35.558223       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:31:35.568307       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:31:35.568751       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:31:35.568779       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:31:35.573689       1 config.go:309] "Starting node config controller"
	I1206 09:31:35.573721       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:31:35.573727       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:31:35.574285       1 config.go:200] "Starting service config controller"
	I1206 09:31:35.574318       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:31:35.574333       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:31:35.574337       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:31:35.574396       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:31:35.574419       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:31:35.674576       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:31:35.674822       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:31:35.675024       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [99f2a113f5da2f934fa8c61ae576a810a559c2a1c909fc7f117a910531abe997] <==
	E1206 09:31:56.369609       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:31:56.369621       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:31:56.369631       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:31:56.369642       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:31:56.373208       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:31:56.373261       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:31:56.373451       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:31:56.373487       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:31:56.373653       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:31:56.374989       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:31:56.375145       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:31:56.375237       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:31:56.375249       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:31:56.375478       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:31:56.375636       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:31:56.375719       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:31:56.375732       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:31:56.395675       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1206 09:32:30.640720       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 09:32:30.640782       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 09:32:30.640802       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 09:32:30.641024       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1206 09:32:30.641126       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:32:30.641314       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 09:32:30.641498       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bb612a274e0afcc16d80f9034791117adaf12a71e06d30a42ecfca73ad94e768] <==
	E1206 09:32:34.395122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.160:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:32:34.422999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.160:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:32:34.498576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.160:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:32:34.506249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.160:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:32:35.868585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.160:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:32:35.934432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.39.160:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:32:35.971237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.160:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:32:36.100705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:32:36.149720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.160:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:32:36.161479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:32:36.259205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.160:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:32:36.319508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.160:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:32:36.336712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:32:36.522195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.160:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:32:36.625159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:32:36.715977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.160:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:32:36.797760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.160:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:32:36.819316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.39.160:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:32:37.095877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.160:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:32:37.229693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.39.160:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:32:37.276447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.160:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:32:37.346742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.160:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:32:37.539715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.160:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.160:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:32:40.172046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:32:47.523706       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:38:19 functional-715379 kubelet[5303]: E1206 09:38:19.479112    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9trdj" podUID="d6f4e72e-34af-40e7-a143-4075702d48de"
	Dec 06 09:38:20 functional-715379 kubelet[5303]: E1206 09:38:20.479549    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:38:22 functional-715379 kubelet[5303]: E1206 09:38:22.480521    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	Dec 06 09:38:28 functional-715379 kubelet[5303]: E1206 09:38:28.480326    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:38:31 functional-715379 kubelet[5303]: E1206 09:38:31.480162    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:38:34 functional-715379 kubelet[5303]: E1206 09:38:34.480820    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-9trdj" podUID="d6f4e72e-34af-40e7-a143-4075702d48de"
	Dec 06 09:38:37 functional-715379 kubelet[5303]: E1206 09:38:37.482478    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	Dec 06 09:38:43 functional-715379 kubelet[5303]: E1206 09:38:43.480385    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:38:46 functional-715379 kubelet[5303]: E1206 09:38:46.479976    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:38:52 functional-715379 kubelet[5303]: E1206 09:38:52.480033    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	Dec 06 09:38:58 functional-715379 kubelet[5303]: E1206 09:38:58.479569    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:39:00 functional-715379 kubelet[5303]: E1206 09:39:00.405435    5303 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:39:00 functional-715379 kubelet[5303]: E1206 09:39:00.405546    5303 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:39:00 functional-715379 kubelet[5303]: E1206 09:39:00.405645    5303 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(18a069e9-182a-451d-9e17-f139f86fd0bd): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:39:00 functional-715379 kubelet[5303]: E1206 09:39:00.405682    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:39:08 functional-715379 kubelet[5303]: E1206 09:39:08.399129    5303 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:39:08 functional-715379 kubelet[5303]: E1206 09:39:08.399207    5303 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:39:08 functional-715379 kubelet[5303]: E1206 09:39:08.399299    5303 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-pfrh4_kubernetes-dashboard(5b229716-218c-41ab-bb51-c681537d721b): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:39:08 functional-715379 kubelet[5303]: E1206 09:39:08.399341    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pfrh4" podUID="5b229716-218c-41ab-bb51-c681537d721b"
	Dec 06 09:39:10 functional-715379 kubelet[5303]: E1206 09:39:10.405073    5303 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:39:10 functional-715379 kubelet[5303]: E1206 09:39:10.405141    5303 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:39:10 functional-715379 kubelet[5303]: E1206 09:39:10.405220    5303 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-2jpwj_kubernetes-dashboard(4b32d69a-f564-4767-8a6d-66e8836aa230): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:39:10 functional-715379 kubelet[5303]: E1206 09:39:10.405252    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2jpwj" podUID="4b32d69a-f564-4767-8a6d-66e8836aa230"
	Dec 06 09:39:11 functional-715379 kubelet[5303]: E1206 09:39:11.482218    5303 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="18a069e9-182a-451d-9e17-f139f86fd0bd"
	Dec 06 09:39:11 functional-715379 kubelet[5303]: I1206 09:39:11.494170    5303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-7d85dfc575-9trdj" podStartSLOduration=23.359143459 podStartE2EDuration="6m0.494153927s" podCreationTimestamp="2025-12-06 09:33:11 +0000 UTC" firstStartedPulling="2025-12-06 09:33:12.041536503 +0000 UTC m=+34.695711939" lastFinishedPulling="2025-12-06 09:38:49.17654697 +0000 UTC m=+371.830722407" observedRunningTime="2025-12-06 09:38:49.665504907 +0000 UTC m=+372.319680362" watchObservedRunningTime="2025-12-06 09:39:11.494153927 +0000 UTC m=+394.148329383"
	
	
	==> storage-provisioner [839564bcdbe311c10548583f3e612b7d2577aec0f2fda27317620b2589a32398] <==
	I1206 09:32:40.961846       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:32:40.963723       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f91e424d917601f86d02811bac41868cdd9e38e8142396247d355b4b12508629] <==
	W1206 09:38:50.715617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:52.718094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:52.726608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:54.730017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:54.735467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:56.738220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:56.744096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:58.747630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:38:58.752686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:00.756274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:00.760746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:02.765418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:02.770199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:04.773969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:04.778662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:06.781590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:06.788510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:08.792783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:08.798603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:10.801750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:10.806771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:12.809746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:12.815886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:14.825832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:39:14.835061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-715379 -n functional-715379
helpers_test.go:269: (dbg) Run:  kubectl --context functional-715379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-2jpwj kubernetes-dashboard-855c9754f9-pfrh4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-715379 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-2jpwj kubernetes-dashboard-855c9754f9-pfrh4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-715379 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-2jpwj kubernetes-dashboard-855c9754f9-pfrh4: exit status 1 (79.611342ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-715379/192.168.39.160
	Start Time:       Sat, 06 Dec 2025 09:33:11 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://3934e08645b64bcf347af631d06c9ab87c0db3802ea6ce5e335144dc3ffa96dd
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:33:16 +0000
	      Finished:     Sat, 06 Dec 2025 09:33:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wplkx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wplkx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m5s  default-scheduler  Successfully assigned default/busybox-mount to functional-715379
	  Normal  Pulling    6m5s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 755ms (4.39s including waiting). Image size: 2395207 bytes.
	  Normal  Created    6m    kubelet            Created container: mount-munger
	  Normal  Started    6m    kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-715379/192.168.39.160
	Start Time:       Sat, 06 Dec 2025 09:33:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4mv9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p4mv9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-715379
	  Warning  Failed     4m35s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m3s (x5 over 6m1s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m2s (x4 over 5m58s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m2s (x5 over 5m58s)  kubelet            Error: ErrImagePull
	  Warning  Failed     56s (x20 over 5m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    45s (x21 over 5m58s)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-2jpwj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-pfrh4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-715379 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-2jpwj kubernetes-dashboard-855c9754f9-pfrh4: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (375.95s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-878866 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-878866 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-878866 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-878866 --alsologtostderr -v=1] stderr:
I1206 09:42:24.131520  403288 out.go:360] Setting OutFile to fd 1 ...
I1206 09:42:24.131789  403288 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:42:24.131799  403288 out.go:374] Setting ErrFile to fd 2...
I1206 09:42:24.131803  403288 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:42:24.132036  403288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:42:24.132274  403288 mustload.go:66] Loading cluster: functional-878866
I1206 09:42:24.132629  403288 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:42:24.134614  403288 host.go:66] Checking if "functional-878866" exists ...
I1206 09:42:24.134907  403288 api_server.go:166] Checking apiserver status ...
I1206 09:42:24.134948  403288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 09:42:24.137576  403288 main.go:143] libmachine: domain functional-878866 has defined MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:42:24.137960  403288 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:ce:27", ip: ""} in network mk-functional-878866: {Iface:virbr1 ExpiryTime:2025-12-06 10:39:33 +0000 UTC Type:0 Mac:52:54:00:06:ce:27 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-878866 Clientid:01:52:54:00:06:ce:27}
I1206 09:42:24.137988  403288 main.go:143] libmachine: domain functional-878866 has defined IP address 192.168.39.195 and MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:42:24.138132  403288 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-878866/id_rsa Username:docker}
I1206 09:42:24.245189  403288 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5472/cgroup
W1206 09:42:24.256980  403288 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5472/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1206 09:42:24.257039  403288 ssh_runner.go:195] Run: ls
I1206 09:42:24.264291  403288 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8441/healthz ...
I1206 09:42:24.270690  403288 api_server.go:279] https://192.168.39.195:8441/healthz returned 200:
ok
W1206 09:42:24.270746  403288 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1206 09:42:24.270936  403288 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:42:24.270961  403288 addons.go:70] Setting dashboard=true in profile "functional-878866"
I1206 09:42:24.270979  403288 addons.go:239] Setting addon dashboard=true in "functional-878866"
I1206 09:42:24.271018  403288 host.go:66] Checking if "functional-878866" exists ...
I1206 09:42:24.273920  403288 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1206 09:42:24.275002  403288 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1206 09:42:24.275972  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1206 09:42:24.275987  403288 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1206 09:42:24.278330  403288 main.go:143] libmachine: domain functional-878866 has defined MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:42:24.278695  403288 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:ce:27", ip: ""} in network mk-functional-878866: {Iface:virbr1 ExpiryTime:2025-12-06 10:39:33 +0000 UTC Type:0 Mac:52:54:00:06:ce:27 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-878866 Clientid:01:52:54:00:06:ce:27}
I1206 09:42:24.278735  403288 main.go:143] libmachine: domain functional-878866 has defined IP address 192.168.39.195 and MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:42:24.278893  403288 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-878866/id_rsa Username:docker}
I1206 09:42:24.410295  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1206 09:42:24.410319  403288 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1206 09:42:24.461725  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1206 09:42:24.461747  403288 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1206 09:42:24.489353  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1206 09:42:24.489385  403288 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1206 09:42:24.522118  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1206 09:42:24.522150  403288 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1206 09:42:24.569010  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1206 09:42:24.569033  403288 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1206 09:42:24.601775  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1206 09:42:24.601807  403288 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1206 09:42:24.629168  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1206 09:42:24.629196  403288 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1206 09:42:24.674188  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1206 09:42:24.674214  403288 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1206 09:42:24.697075  403288 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:42:24.697097  403288 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1206 09:42:24.725038  403288 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:42:25.689650  403288 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-878866 addons enable metrics-server

                                                
                                                
I1206 09:42:25.690627  403288 addons.go:202] Writing out "functional-878866" config to set dashboard=true...
W1206 09:42:25.690970  403288 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1206 09:42:25.691952  403288 kapi.go:59] client config for functional-878866: &rest.Config{Host:"https://192.168.39.195:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.key", CAFile:"/home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1206 09:42:25.692619  403288 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1206 09:42:25.692650  403288 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1206 09:42:25.692658  403288 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1206 09:42:25.692664  403288 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1206 09:42:25.692671  403288 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1206 09:42:25.701475  403288 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  4496c2e5-041f-44a9-9ee9-6a8613e90182 830 0 2025-12-06 09:42:25 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-06 09:42:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.187.143,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.187.143],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1206 09:42:25.701608  403288 out.go:285] * Launching proxy ...
* Launching proxy ...
I1206 09:42:25.701662  403288 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-878866 proxy --port 36195]
I1206 09:42:25.702028  403288 dashboard.go:159] Waiting for kubectl to output host:port ...
I1206 09:42:25.763046  403288 out.go:203] 
W1206 09:42:25.764040  403288 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1206 09:42:25.764056  403288 out.go:285] * 
* 
W1206 09:42:25.768218  403288 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1206 09:42:25.769256  403288 out.go:203] 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-878866 -n functional-878866
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 logs -n 25: (1.392642989s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-878866 ssh -- ls -la /mount-9p                                                                                                         │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo umount -f /mount-9p                                                                                                    │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ image     │ functional-878866 image ls                                                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh findmnt -T /mount1                                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ mount     │ -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount1 --alsologtostderr -v=1                │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ mount     │ -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount2 --alsologtostderr -v=1                │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ image     │ functional-878866 image save --daemon kicbase/echo-server:functional-878866 --alsologtostderr                                                     │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh echo hello                                                                                                                  │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh findmnt -T /mount1                                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh cat /etc/hostname                                                                                                           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh findmnt -T /mount2                                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ addons    │ functional-878866 addons list                                                                                                                     │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh findmnt -T /mount3                                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ addons    │ functional-878866 addons list -o json                                                                                                             │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ start     │ -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ mount     │ -p functional-878866 --kill=true                                                                                                                  │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ start     │ -p functional-878866 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ start     │ -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-878866 --alsologtostderr -v=1                                                                                    │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ ssh       │ functional-878866 ssh sudo cat /etc/ssl/certs/387687.pem                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /usr/share/ca-certificates/387687.pem                                                                              │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /etc/ssl/certs/3876872.pem                                                                                         │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /usr/share/ca-certificates/3876872.pem                                                                             │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:42:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:42:24.019904  403261 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:42:24.020138  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020149  403261 out.go:374] Setting ErrFile to fd 2...
	I1206 09:42:24.020155  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020466  403261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:42:24.020924  403261 out.go:368] Setting JSON to false
	I1206 09:42:24.021821  403261 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8694,"bootTime":1765005450,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:42:24.021904  403261 start.go:143] virtualization: kvm guest
	I1206 09:42:24.023333  403261 out.go:179] * [functional-878866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:42:24.025176  403261 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:42:24.025162  403261 notify.go:221] Checking for updates...
	I1206 09:42:24.026373  403261 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:42:24.027496  403261 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:42:24.028692  403261 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:42:24.029895  403261 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:42:24.031067  403261 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:42:24.020670  403254 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:24.021222  403254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:24.060370  403254 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:42:24.061450  403254 start.go:309] selected driver: kvm2
	I1206 09:42:24.061471  403254 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.061626  403254 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:24.063197  403254 cni.go:84] Creating CNI manager for ""
	I1206 09:42:24.063287  403254 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:42:24.063349  403254 start.go:353] cluster config:
	{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.065378  403254 out.go:179] * dry-run validation complete!
	I1206 09:42:24.032603  403261 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:24.033150  403261 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:24.072141  403261 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:42:24.073419  403261 start.go:309] selected driver: kvm2
	I1206 09:42:24.073436  403261 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.073570  403261 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:24.075812  403261 out.go:203] 
	W1206 09:42:24.076830  403261 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:42:24.077921  403261 out.go:203] 
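The RSRC_INSUFFICIENT_REQ_MEMORY exit above corresponds to the `start -p functional-878866 --dry-run --memory 250MB ...` invocation recorded in the Audit table: 250 MiB is below minikube's 1800 MB floor, so dry-run validation rejects it before touching the VM. A sketch of the same dry-run at the floor, assuming one wanted to confirm where validation starts passing; the 1800MB value is taken from the error message, not re-verified here:

    # Hypothetical re-run of the dry-run validation at the 1800MB minimum quoted
    # in the error; --dry-run only validates configuration and does not start a VM.
    out/minikube-linux-amd64 start -p functional-878866 --dry-run --memory 1800MB \
      --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0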
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a655845f1732c       56cc512116c8f       7 seconds ago        Exited              mount-munger              0                   58726284f6200       busybox-mount                               default
	113437169b1c3       6e38f40d628db       20 seconds ago       Running             storage-provisioner       4                   dbe75fc56a826       storage-provisioner                         kube-system
	bdb6875900eef       45f3cc72d235f       34 seconds ago       Running             kube-controller-manager   4                   1a39bbb70829f       kube-controller-manager-functional-878866   kube-system
	468bbd4cc840f       aa5e3ebc0dfed       35 seconds ago       Running             coredns                   2                   ea0409f91e89e       coredns-7d764666f9-9mcjc                    kube-system
	242f635ee3d72       8a4ded35a3eb1       35 seconds ago       Running             kube-proxy                2                   eb63a271df8e5       kube-proxy-nv7xx                            kube-system
	dd7657ebf0fb0       6e38f40d628db       35 seconds ago       Exited              storage-provisioner       3                   dbe75fc56a826       storage-provisioner                         kube-system
	bd0d67905e6c1       aa9d02839d8de       38 seconds ago       Running             kube-apiserver            0                   17a7847a72c32       kube-apiserver-functional-878866            kube-system
	27cb54bb37729       7bb6219ddab95       44 seconds ago       Running             kube-scheduler            2                   3fea2ff6933b4       kube-scheduler-functional-878866            kube-system
	5c87b2b354447       a3e246e9556e9       45 seconds ago       Running             etcd                      2                   638d6d6fc7944       etcd-functional-878866                      kube-system
	301ce41fdfafe       45f3cc72d235f       45 seconds ago       Exited              kube-controller-manager   3                   1a39bbb70829f       kube-controller-manager-functional-878866   kube-system
	d0bf8f349bbc2       a3e246e9556e9       About a minute ago   Exited              etcd                      1                   638d6d6fc7944       etcd-functional-878866                      kube-system
	9f4729263a3b0       7bb6219ddab95       About a minute ago   Exited              kube-scheduler            1                   3fea2ff6933b4       kube-scheduler-functional-878866            kube-system
	8ab3cdb5a27f5       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   1                   ea0409f91e89e       coredns-7d764666f9-9mcjc                    kube-system
	fb6aa836f3149       8a4ded35a3eb1       About a minute ago   Exited              kube-proxy                1                   eb63a271df8e5       kube-proxy-nv7xx                            kube-system
	
	
	==> containerd <==
	Dec 06 09:42:21 functional-878866 containerd[4558]: time="2025-12-06T09:42:21.856112915Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Dec 06 09:42:21 functional-878866 containerd[4558]: time="2025-12-06T09:42:21.863645700Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-878866\" returns successfully"
	Dec 06 09:42:22 functional-878866 containerd[4558]: time="2025-12-06T09:42:22.503929656Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-878866\""
	Dec 06 09:42:22 functional-878866 containerd[4558]: time="2025-12-06T09:42:22.511314264Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:42:22 functional-878866 containerd[4558]: time="2025-12-06T09:42:22.511706259Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-878866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:42:24 functional-878866 containerd[4558]: time="2025-12-06T09:42:24.357856842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:sp-pod,Uid:ffe9435b-fd28-4a28-8bbe-994ce1895e67,Namespace:default,Attempt:0,}"
	Dec 06 09:42:24 functional-878866 containerd[4558]: time="2025-12-06T09:42:24.524772127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 06 09:42:24 functional-878866 containerd[4558]: time="2025-12-06T09:42:24.524857405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 06 09:42:24 functional-878866 containerd[4558]: time="2025-12-06T09:42:24.524954850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:42:24 functional-878866 containerd[4558]: time="2025-12-06T09:42:24.525068558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:42:24 functional-878866 containerd[4558]: time="2025-12-06T09:42:24.641015546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:sp-pod,Uid:ffe9435b-fd28-4a28-8bbe-994ce1895e67,Namespace:default,Attempt:0,} returns sandbox id \"90cbea43d6d2d62a548371f37e163fd8f35c67d0bb3dac933189fc6f23e2475a\""
	Dec 06 09:42:24 functional-878866 containerd[4558]: time="2025-12-06T09:42:24.647097255Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 06 09:42:24 functional-878866 containerd[4558]: time="2025-12-06T09:42:24.650478795Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:42:24 functional-878866 containerd[4558]: time="2025-12-06T09:42:24.909245219Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.578086389Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.578109282Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.710627687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:hello-node-connect-9f67c86d4-94flb,Uid:ccbb1545-75f9-4cb6-a66a-541dd12483f3,Namespace:default,Attempt:0,}"
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.851190456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.851272331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.851294114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.851668573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.933912116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:hello-node-connect-9f67c86d4-94flb,Uid:ccbb1545-75f9-4cb6-a66a-541dd12483f3,Namespace:default,Attempt:0,} returns sandbox id \"c8fc15fee3d037e0a87cb47b9c0469198f426585bc1225eb5b4fad5586cf2c38\""
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.937032878Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 06 09:42:25 functional-878866 containerd[4558]: time="2025-12-06T09:42:25.940868204Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:42:26 functional-878866 containerd[4558]: time="2025-12-06T09:42:26.195888828Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
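The nginx pull failure above is Docker Hub's anonymous pull rate limit (429 Too Many Requests), not a containerd or cluster fault. One possible mitigation sketch, assuming the CI host has a local docker daemon with cached or authenticated access; `image load` copies the host image into the profile's container runtime so the kubelet never has to contact registry-1.docker.io:

    # Pull (or reuse a cached copy of) the image on the host, then side-load it
    # into the functional-878866 profile's container runtime.
    docker pull nginx:latest
    out/minikube-linux-amd64 -p functional-878866 image load nginx:latest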
	
	
	==> coredns [468bbd4cc840f32413f20e1e8a02e5a4cb7382aedb4c8a5909efc9aab6bf840a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54038 - 4803 "HINFO IN 6012224714022320600.6695806507940078930. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.498138699s
	
	
	==> coredns [8ab3cdb5a27f5110ee0a75f3126afadad9de822ca2418dec4c39630836e67768] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44124 - 54735 "HINFO IN 6652856036360049109.7864262208624346140. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079558221s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-878866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-878866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-878866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_39_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:39:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-878866
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:42:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:41:50 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:41:50 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:41:50 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:41:50 +0000   Sat, 06 Dec 2025 09:39:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    functional-878866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8a7ede37b9346d29806749a6624cb26
	  System UUID:                d8a7ede3-7b93-46d2-9806-749a6624cb26
	  Boot ID:                    201bfa39-c6a0-473d-92c7-ea19f1cbce81
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-dxgxn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     hello-node-connect-9f67c86d4-94flb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 coredns-7d764666f9-9mcjc                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m32s
	  kube-system                 etcd-functional-878866                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m38s
	  kube-system                 kube-apiserver-functional-878866              250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-functional-878866     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-nv7xx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-functional-878866              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-pxs66    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vrq6x          0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                Age    From             Message
	  ----    ------                ----   ----             -------
	  Normal  RegisteredNode        2m33s  node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	  Normal  CIDRAssignmentFailed  2m33s  cidrAllocator    Node functional-878866 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode        77s    node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	  Normal  RegisteredNode        32s    node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	
	
	==> dmesg <==
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000044] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000568] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.183064] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084534] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.104220] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.120351] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.063268] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 09:40] kauditd_printk_skb: 276 callbacks suppressed
	[ +32.658143] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.884559] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.057660] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.250628] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.162312] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 6 09:41] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.117477] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.001336] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.232546] kauditd_printk_skb: 55 callbacks suppressed
	[  +3.667647] kauditd_printk_skb: 74 callbacks suppressed
	[Dec 6 09:42] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.287607] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.861371] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000482] kauditd_printk_skb: 67 callbacks suppressed
	
	
	==> etcd [5c87b2b35444796d5546b4df37410d5bf4723a21ef12d6bf8f413569ca270286] <==
	{"level":"warn","ts":"2025-12-06T09:41:49.410772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.422042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.432754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.440870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.448509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.456640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.464438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.472941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.485430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.495353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.511314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.524369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.531963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.539221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.547392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.556536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.568757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.578779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.586547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.594627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.602424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.613696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.620753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.628230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.703265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48216","server-name":"","error":"EOF"}
	
	
	==> etcd [d0bf8f349bbc290485d8210fdd2f3fb4eb8be97bd7679ccbac00714d84bff7cd] <==
	{"level":"warn","ts":"2025-12-06T09:41:05.651066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.667701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.680961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.685784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.697007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.706722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.756545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39140","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:41:41.304782Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T09:41:41.304851Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-878866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	{"level":"error","ts":"2025-12-06T09:41:41.304928Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:41:41.304984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:41:41.306703Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.306771Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"324857e3fe6e5c62","current-leader-member-id":"324857e3fe6e5c62"}
	{"level":"info","ts":"2025-12-06T09:41:41.306847Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T09:41:41.306875Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307236Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307312Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:41:41.307321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307640Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307777Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:41:41.307788Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.195:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.310487Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"error","ts":"2025-12-06T09:41:41.310547Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.195:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.310648Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2025-12-06T09:41:41.310666Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-878866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	
	
	==> kernel <==
	 09:42:26 up 3 min,  0 users,  load average: 1.34, 0.59, 0.23
	Linux functional-878866 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [bd0d67905e6c1f2134865dda105fed233a0a83a241a0356900579bf523721f2d] <==
	I1206 09:41:50.417394       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:41:50.417399       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:41:50.417403       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:41:50.434798       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:41:50.465437       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:41:50.469440       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:41:50.473406       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:50.491599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:41:50.497599       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:41:51.145164       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:41:51.275751       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1206 09:41:51.812960       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I1206 09:41:51.814646       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:41:51.820722       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:41:52.396418       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:41:52.443361       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:41:52.471343       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:41:52.482831       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:41:54.504347       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:42:11.434844       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.54.232"}
	I1206 09:42:15.521981       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.151.210"}
	I1206 09:42:25.195765       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:42:25.470176       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.252.61"}
	I1206 09:42:25.655391       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.187.143"}
	I1206 09:42:25.673856       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.202.77"}
	
	
	==> kube-controller-manager [301ce41fdfafed40e68e49ee314f81bbc53e16ad44cc043d94fd73225b9fecb3] <==
	I1206 09:41:41.658756       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:41:41.672446       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1206 09:41:41.672485       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:41.674118       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1206 09:41:41.674205       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1206 09:41:41.674208       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1206 09:41:41.674341       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:41:51.681697       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [bdb6875900eefdb8a5d6f908a4e221a38a6d54ba97657137646fc200dba38a89] <==
	I1206 09:41:54.128709       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.119713       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128530       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128535       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128541       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128545       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128972       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128978       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.120273       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128511       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.120289       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.119701       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.141422       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.216177       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.221533       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.221643       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:41:54.221649       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E1206 09:42:25.327816       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.333329       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.341834       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.375193       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.375293       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.416821       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.422484       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.432393       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [242f635ee3d72904c3cbe96ad4ab51fe3b6e05ba970b68336d7568b6dc232a80] <==
	I1206 09:41:51.821997       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:41:51.925315       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:51.925336       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1206 09:41:51.925415       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:41:51.964879       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:41:51.964934       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:41:51.964953       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:41:51.974241       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:41:51.974520       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:41:51.974531       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:51.978887       1 config.go:200] "Starting service config controller"
	I1206 09:41:51.978913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:41:51.978938       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:41:51.978943       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:41:51.978952       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:41:51.978956       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:41:51.979291       1 config.go:309] "Starting node config controller"
	I1206 09:41:51.979296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:41:51.979308       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:41:52.079337       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:41:52.079359       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:41:52.079396       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
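
Both kube-proxy instances emit the same nodePortAddresses warning above. The remedy it quotes (`--nodeport-addresses primary`) maps onto the kube-proxy configuration file that kubeadm/minikube ship via the kube-proxy ConfigMap; a sketch of the relevant excerpt follows (the config-file spelling of the special value should be checked against the kube-proxy version in use, since the warning only quotes the CLI flag):

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    # Restrict NodePort listeners instead of accepting them on every local IP.
    nodePortAddresses:
      - primary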
	
	
	==> kube-proxy [fb6aa836f31492adc7d3b470fb9e656f39e0c5c2645af9fe9a1150d9e1c0e275] <==
	I1206 09:40:56.864442       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1206 09:40:56.864503       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:40:56.901030       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:40:56.901092       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:40:56.901132       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:40:56.910200       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:40:56.910549       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:40:56.910625       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:40:56.916862       1 config.go:200] "Starting service config controller"
	I1206 09:40:56.916898       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:40:56.916912       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:40:56.916916       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:40:56.916926       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:40:56.916929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:40:56.918717       1 config.go:309] "Starting node config controller"
	I1206 09:40:56.918729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:40:56.918734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1206 09:40:56.919143       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.195:8441: connect: connection refused"
	E1206 09:41:06.427779       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]"
	I1206 09:41:06.517987       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:41:16.917989       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:41:17.418051       1 shared_informer.go:356] "Caches are synced" controller="service config"
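
The "No iptables support for family IPv6" notice in both kube-proxy runs only means the ip6table_nat kernel module is not loaded in the minikube VM, so kube-proxy stays single-stack IPv4, which matches this cluster. If dual-stack were actually wanted, the module would have to be loaded on the node first, roughly:

    # Run inside the node/VM: load the IPv6 NAT table so `ip6tables -t nat` works.
    sudo modprobe ip6table_nat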
	
	
	==> kube-scheduler [27cb54bb377297f85fd874a25db94789aca7a4095cbd4184870c7fce9e0dcd66] <==
	I1206 09:41:42.817226       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:41:42.823657       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.195:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.195:8441: connect: connection refused
	W1206 09:41:42.823750       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:41:42.823769       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:41:42.832420       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:41:42.832436       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:42.835185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:41:42.835306       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:41:42.835368       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:41:42.835534       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:41:50.324649       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:41:50.325177       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:41:50.325711       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:41:50.332323       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1206 09:41:50.337052       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1206 09:41:50.337336       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I1206 09:41:50.935781       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [9f4729263a3b0d4ea91d2b994e07e1f3295d8db7192fee9b108c8f73e694abcd] <==
	I1206 09:40:47.309076       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:40:55.199596       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:40:55.199724       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:40:55.205483       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:40:55.205640       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1206 09:40:55.205653       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.205669       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:40:55.208848       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:40:55.208881       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.208906       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:40:55.209295       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.305768       1 shared_informer.go:377] "Caches are synced"
	I1206 09:40:55.308994       1 shared_informer.go:377] "Caches are synced"
	I1206 09:40:55.310391       1 shared_informer.go:377] "Caches are synced"
	E1206 09:41:06.338108       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1206 09:41:06.359751       1 reflector.go:204] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1206 09:41:06.427066       1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1206 09:41:41.397323       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 09:41:41.397810       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:41:41.398141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:41:41.398167       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1206 09:41:41.398479       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 09:41:41.398664       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 09:41:41.398896       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 09:41:41.399017       1 run.go:72] "command failed" err="finished without leader elect"
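
The cluster-scope RBAC denials above ("clusterrole ... not found" for system:kube-scheduler, system:node-proxier and friends) are transient: they appear while the apiserver's rbac/bootstrap-roles post-start hook is still reconciling the default roles after the restart. Once it finishes, the bootstrap objects are back and can be confirmed with plain kubectl:

    kubectl get clusterrole system:kube-scheduler system:volume-scheduler system:node-proxier
    kubectl get clusterrolebinding system:kube-scheduler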
	
	
	==> kubelet <==
	Dec 06 09:42:20 functional-878866 kubelet[5363]: I1206 09:42:20.737385    5363 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/5b258008-690e-46ac-95e9-db4745241e5c-test-volume\") on node \"functional-878866\" DevicePath \"\""
	Dec 06 09:42:21 functional-878866 kubelet[5363]: I1206 09:42:21.337290    5363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58726284f6200af89b249ad5c17efba811230b51c236114a44c9cae4af379c01"
	Dec 06 09:42:24 functional-878866 kubelet[5363]: I1206 09:42:24.166261    5363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15bbb43d-9427-453c-8b2c-4c0cb7ebb869\" (UniqueName: \"kubernetes.io/host-path/ffe9435b-fd28-4a28-8bbe-994ce1895e67-pvc-15bbb43d-9427-453c-8b2c-4c0cb7ebb869\") pod \"sp-pod\" (UID: \"ffe9435b-fd28-4a28-8bbe-994ce1895e67\") " pod="default/sp-pod"
	Dec 06 09:42:24 functional-878866 kubelet[5363]: I1206 09:42:24.166305    5363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvcd\" (UniqueName: \"kubernetes.io/projected/ffe9435b-fd28-4a28-8bbe-994ce1895e67-kube-api-access-lzvcd\") pod \"sp-pod\" (UID: \"ffe9435b-fd28-4a28-8bbe-994ce1895e67\") " pod="default/sp-pod"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: E1206 09:42:25.409372    5363 status_manager.go:1045] "Failed to get status for pod" err="pods \"hello-node-connect-9f67c86d4-94flb\" is forbidden: User \"system:node:functional-878866\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'functional-878866' and this object" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3" pod="default/hello-node-connect-9f67c86d4-94flb"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: E1206 09:42:25.488952    5363 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:functional-878866\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'functional-878866' and this object"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: E1206 09:42:25.503947    5363 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:functional-878866\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'functional-878866' and this object" logger="UnhandledError" reflector="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: E1206 09:42:25.579101    5363 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: E1206 09:42:25.579218    5363 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: E1206 09:42:25.579626    5363 kuberuntime_manager.go:1664] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(ffe9435b-fd28-4a28-8bbe-994ce1895e67): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: E1206 09:42:25.579704    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: I1206 09:42:25.579750    5363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ca4a50bc-be9f-42d9-8667-c0c28149a805-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-vrq6x\" (UID: \"ca4a50bc-be9f-42d9-8667-c0c28149a805\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: I1206 09:42:25.579783    5363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc2xk\" (UniqueName: \"kubernetes.io/projected/ccbb1545-75f9-4cb6-a66a-541dd12483f3-kube-api-access-nc2xk\") pod \"hello-node-connect-9f67c86d4-94flb\" (UID: \"ccbb1545-75f9-4cb6-a66a-541dd12483f3\") " pod="default/hello-node-connect-9f67c86d4-94flb"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: I1206 09:42:25.579799    5363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxz9k\" (UniqueName: \"kubernetes.io/projected/ca4a50bc-be9f-42d9-8667-c0c28149a805-kube-api-access-nxz9k\") pod \"kubernetes-dashboard-b84665fb8-vrq6x\" (UID: \"ca4a50bc-be9f-42d9-8667-c0c28149a805\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: I1206 09:42:25.680388    5363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bdd45c49-4f7d-4c58-bf71-55d765230fe9-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-pxs66\" (UID: \"bdd45c49-4f7d-4c58-bf71-55d765230fe9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66"
	Dec 06 09:42:25 functional-878866 kubelet[5363]: I1206 09:42:25.680445    5363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crgkq\" (UniqueName: \"kubernetes.io/projected/bdd45c49-4f7d-4c58-bf71-55d765230fe9-kube-api-access-crgkq\") pod \"dashboard-metrics-scraper-5565989548-pxs66\" (UID: \"bdd45c49-4f7d-4c58-bf71-55d765230fe9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66"
	Dec 06 09:42:26 functional-878866 kubelet[5363]: E1206 09:42:26.362677    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:42:26 functional-878866 kubelet[5363]: E1206 09:42:26.692953    5363 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 06 09:42:26 functional-878866 kubelet[5363]: E1206 09:42:26.693070    5363 projected.go:196] Error preparing data for projected volume kube-api-access-nxz9k for pod kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x: failed to sync configmap cache: timed out waiting for the condition
	Dec 06 09:42:26 functional-878866 kubelet[5363]: E1206 09:42:26.693201    5363 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca4a50bc-be9f-42d9-8667-c0c28149a805-kube-api-access-nxz9k podName:ca4a50bc-be9f-42d9-8667-c0c28149a805 nodeName:}" failed. No retries permitted until 2025-12-06 09:42:27.193176662 +0000 UTC m=+39.253378357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nxz9k" (UniqueName: "kubernetes.io/projected/ca4a50bc-be9f-42d9-8667-c0c28149a805-kube-api-access-nxz9k") pod "kubernetes-dashboard-b84665fb8-vrq6x" (UID: "ca4a50bc-be9f-42d9-8667-c0c28149a805") : failed to sync configmap cache: timed out waiting for the condition
	Dec 06 09:42:26 functional-878866 kubelet[5363]: E1206 09:42:26.848061    5363 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 06 09:42:26 functional-878866 kubelet[5363]: E1206 09:42:26.848108    5363 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 06 09:42:26 functional-878866 kubelet[5363]: E1206 09:42:26.848273    5363 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-9f67c86d4-94flb_default(ccbb1545-75f9-4cb6-a66a-541dd12483f3): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:42:26 functional-878866 kubelet[5363]: E1206 09:42:26.848306    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	Dec 06 09:42:27 functional-878866 kubelet[5363]: E1206 09:42:27.367803    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	
	
	==> storage-provisioner [113437169b1c37974e96925307abdbb55cfc82e538a9033341e8c43d02a37a4d] <==
	W1206 09:42:06.208957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:42:09.664051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:42:13.924851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:42:17.529979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:42:20.586327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:42:23.610527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:42:23.619739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:42:23.619852       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:42:23.620010       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-878866_4dffa0d0-9b4c-490d-9133-b203fcf5c028!
	I1206 09:42:23.623797       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"26fa7e50-2e3f-48c6-b961-725b0f832aba", APIVersion:"v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-878866_4dffa0d0-9b4c-490d-9133-b203fcf5c028 became leader
	W1206 09:42:23.625840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:42:23.637409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:42:23.720425       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-878866_4dffa0d0-9b4c-490d-9133-b203fcf5c028!
	I1206 09:42:23.721136       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1206 09:42:23.721697       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"15bbb43d-9427-453c-8b2c-4c0cb7ebb869", APIVersion:"v1", ResourceVersion:"741", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1206 09:42:23.721284       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    a4a3f506-c2ea-452a-a94f-b0acddbcf6dc 359 0 2025-12-06 09:39:56 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-12-06 09:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-15bbb43d-9427-453c-8b2c-4c0cb7ebb869 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  15bbb43d-9427-453c-8b2c-4c0cb7ebb869 741 0 2025-12-06 09:42:22 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-12-06 09:42:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-12-06 09:42:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1206 09:42:23.722340       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-15bbb43d-9427-453c-8b2c-4c0cb7ebb869" provisioned
	I1206 09:42:23.722394       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1206 09:42:23.722407       1 volume_store.go:212] Trying to save persistentvolume "pvc-15bbb43d-9427-453c-8b2c-4c0cb7ebb869"
	I1206 09:42:23.742920       1 volume_store.go:219] persistentvolume "pvc-15bbb43d-9427-453c-8b2c-4c0cb7ebb869" saved
	I1206 09:42:23.744025       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"15bbb43d-9427-453c-8b2c-4c0cb7ebb869", APIVersion:"v1", ResourceVersion:"741", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-15bbb43d-9427-453c-8b2c-4c0cb7ebb869
	W1206 09:42:25.647718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:42:25.654246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
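
For reference, the claim the provisioner is acting on above corresponds to roughly this PersistentVolumeClaim (reconstructed from the object dump in the log; name, namespace, size, class and access mode are taken from it verbatim):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: standard
      volumeMode: Filesystem
      resources:
        requests:
          storage: 500Mi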
	
	
	==> storage-provisioner [dd7657ebf0fb090991f2e33ae757c5f56eeb3eaf14b5863f3aedd29af194ccd3] <==
	I1206 09:41:51.647942       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:41:51.652108       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-878866 -n functional-878866
helpers_test.go:269: (dbg) Run:  kubectl --context functional-878866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x: exit status 1 (93.571799ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  containerd://a655845f1732c2d1de014118f364a31fd9020fa965e1a3db715a9272b206b9f5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:42:18 +0000
	      Finished:     Sat, 06 Dec 2025 09:42:18 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxrxb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fxrxb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-878866
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 727ms (727ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    9s    kubelet            Container created
	  Normal  Started    9s    kubelet            Container started
	
	
	Name:             hello-node-5758569b79-dxgxn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:15 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k6dw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9k6dw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  12s   default-scheduler  Successfully assigned default/hello-node-5758569b79-dxgxn to functional-878866
	  Normal   Pulling    11s   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     10s   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     10s   kubelet            Error: ErrImagePull
	  Normal   BackOff    10s   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     10s   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-94flb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nc2xk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nc2xk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  2s    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-94flb to functional-878866
	  Normal   Pulling    2s    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     1s    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     1s    kubelet            Error: ErrImagePull
	  Normal   BackOff    0s    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:24 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lzvcd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-lzvcd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  3s    default-scheduler  Successfully assigned default/sp-pod to functional-878866
	  Normal   Pulling    3s    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2s    kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2s    kubelet            Error: ErrImagePull
	  Normal   BackOff    1s    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     1s    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-pxs66" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-vrq6x" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.90s)
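
Every non-running pod in this failure is blocked on Docker Hub's unauthenticated pull limit (the repeated 429 Too Many Requests responses above), not on anything cluster-side. One common mitigation is to pull with credentials; a minimal sketch follows (the secret name regcred and the credential placeholders are illustrative, not part of the test):

    # Create a docker-registry pull secret and let the default service account
    # use it, so image pulls count against an authenticated quota instead.
    kubectl create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<access-token>
    kubectl patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Pointing the node at a registry mirror (for example with minikube start --registry-mirror=...) is the other usual route.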

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-878866 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-878866 expose deployment hello-node-connect --type=NodePort --port=8080
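
The two commands above are the whole workload under test; in manifest form they amount to roughly the following (a sketch for reference: kubectl create deployment labels the pods app=hello-node-connect and names the container after the image, and kubectl expose reuses that label as the Service selector):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-node-connect
      labels:
        app: hello-node-connect
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hello-node-connect
      template:
        metadata:
          labels:
            app: hello-node-connect
        spec:
          containers:
            - name: echo-server
              image: kicbase/echo-server
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-node-connect
    spec:
      type: NodePort
      selector:
        app: hello-node-connect
      ports:
        - port: 8080
          targetPort: 8080
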
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-94flb" [ccbb1545-75f9-4cb6-a66a-541dd12483f3] Pending
helpers_test.go:352: "hello-node-connect-9f67c86d4-94flb" [ccbb1545-75f9-4cb6-a66a-541dd12483f3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-878866 -n functional-878866
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-06 09:52:25.704457603 +0000 UTC m=+2518.991834739
functional_test.go:1645: (dbg) Run:  kubectl --context functional-878866 describe po hello-node-connect-9f67c86d4-94flb -n default
functional_test.go:1645: (dbg) kubectl --context functional-878866 describe po hello-node-connect-9f67c86d4-94flb -n default:
Name:             hello-node-connect-9f67c86d4-94flb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-878866/192.168.39.195
Start Time:       Sat, 06 Dec 2025 09:42:25 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nc2xk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-nc2xk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-94flb to functional-878866
Warning  Failed     9m21s                   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m (x4 over 9m59s)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m (x5 over 9m59s)      kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-878866 logs hello-node-connect-9f67c86d4-94flb -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-878866 logs hello-node-connect-9f67c86d4-94flb -n default: exit status 1 (62.473875ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-94flb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-878866 logs hello-node-connect-9f67c86d4-94flb -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-878866 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-94flb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-878866/192.168.39.195
Start Time:       Sat, 06 Dec 2025 09:42:25 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nc2xk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-nc2xk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-94flb to functional-878866
Warning  Failed     9m21s                   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m (x4 over 9m59s)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m (x5 over 9m59s)      kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-878866 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-878866 logs -l app=hello-node-connect: exit status 1 (62.869227ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-94flb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-878866 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-878866 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.252.61
IPs:                      10.97.252.61
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31658/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
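The repeated 429 responses above all trace back to Docker Hub's unauthenticated pull rate limit, so the pod never gets past ImagePullBackOff. One possible mitigation (a sketch only, not something this test run did) is to pre-load the image into the minikube node so the kubelet does not have to pull it anonymously from registry-1.docker.io; this assumes the host can still obtain the image, that the functional-878866 profile is running, and that the pod's imagePullPolicy allows using a locally present image:

	# pull once on the host (authenticated, or from a local cache)
	docker pull kicbase/echo-server:latest
	# copy the host image into the node's containerd image store
	minikube -p functional-878866 image load kicbase/echo-server:latest
	# let the deployment retry (assumes imagePullPolicy is not Always)
	kubectl --context functional-878866 rollout restart deployment hello-node-connect

In CI the longer-term fix would more likely be authenticated pulls or a registry mirror rather than per-test image loading.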
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-878866 -n functional-878866
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 logs -n 25: (1.23204913s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start          │ -p functional-878866 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ start          │ -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-878866 --alsologtostderr -v=1                                                                                    │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ ssh            │ functional-878866 ssh sudo cat /etc/ssl/certs/387687.pem                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /usr/share/ca-certificates/387687.pem                                                                              │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /etc/ssl/certs/3876872.pem                                                                                         │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /usr/share/ca-certificates/3876872.pem                                                                             │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /etc/test/nested/copy/387687/hosts                                                                                 │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ image          │ functional-878866 image ls --format short --alsologtostderr                                                                                       │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ image          │ functional-878866 image ls --format yaml --alsologtostderr                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ ssh            │ functional-878866 ssh pgrep buildkitd                                                                                                             │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │                     │
	│ image          │ functional-878866 image build -t localhost/my-image:functional-878866 testdata/build --alsologtostderr                                            │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ image          │ functional-878866 image ls                                                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ image          │ functional-878866 image ls --format json --alsologtostderr                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ image          │ functional-878866 image ls --format table --alsologtostderr                                                                                       │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ update-context │ functional-878866 update-context --alsologtostderr -v=2                                                                                           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ update-context │ functional-878866 update-context --alsologtostderr -v=2                                                                                           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ update-context │ functional-878866 update-context --alsologtostderr -v=2                                                                                           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ service        │ functional-878866 service list                                                                                                                    │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ service        │ functional-878866 service list -o json                                                                                                            │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ service        │ functional-878866 service --namespace=default --https --url hello-node                                                                            │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ service        │ functional-878866 service hello-node --url --format={{.IP}}                                                                                       │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ service        │ functional-878866 service hello-node --url                                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:42:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:42:24.019904  403261 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:42:24.020138  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020149  403261 out.go:374] Setting ErrFile to fd 2...
	I1206 09:42:24.020155  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020466  403261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:42:24.020924  403261 out.go:368] Setting JSON to false
	I1206 09:42:24.021821  403261 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8694,"bootTime":1765005450,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:42:24.021904  403261 start.go:143] virtualization: kvm guest
	I1206 09:42:24.023333  403261 out.go:179] * [functional-878866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:42:24.025176  403261 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:42:24.025162  403261 notify.go:221] Checking for updates...
	I1206 09:42:24.026373  403261 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:42:24.027496  403261 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:42:24.028692  403261 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:42:24.029895  403261 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:42:24.031067  403261 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:42:24.020670  403254 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:24.021222  403254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:24.060370  403254 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:42:24.061450  403254 start.go:309] selected driver: kvm2
	I1206 09:42:24.061471  403254 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.061626  403254 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:24.063197  403254 cni.go:84] Creating CNI manager for ""
	I1206 09:42:24.063287  403254 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:42:24.063349  403254 start.go:353] cluster config:
	{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.065378  403254 out.go:179] * dry-run validation complete!
	I1206 09:42:24.032603  403261 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:24.033150  403261 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:24.072141  403261 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:42:24.073419  403261 start.go:309] selected driver: kvm2
	I1206 09:42:24.073436  403261 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.073570  403261 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:24.075812  403261 out.go:203] 
	W1206 09:42:24.076830  403261 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:42:24.077921  403261 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a655845f1732c       56cc512116c8f       10 minutes ago      Exited              mount-munger              0                   58726284f6200       busybox-mount                               default
	113437169b1c3       6e38f40d628db       10 minutes ago      Running             storage-provisioner       4                   dbe75fc56a826       storage-provisioner                         kube-system
	bdb6875900eef       45f3cc72d235f       10 minutes ago      Running             kube-controller-manager   4                   1a39bbb70829f       kube-controller-manager-functional-878866   kube-system
	468bbd4cc840f       aa5e3ebc0dfed       10 minutes ago      Running             coredns                   2                   ea0409f91e89e       coredns-7d764666f9-9mcjc                    kube-system
	242f635ee3d72       8a4ded35a3eb1       10 minutes ago      Running             kube-proxy                2                   eb63a271df8e5       kube-proxy-nv7xx                            kube-system
	dd7657ebf0fb0       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       3                   dbe75fc56a826       storage-provisioner                         kube-system
	bd0d67905e6c1       aa9d02839d8de       10 minutes ago      Running             kube-apiserver            0                   17a7847a72c32       kube-apiserver-functional-878866            kube-system
	27cb54bb37729       7bb6219ddab95       10 minutes ago      Running             kube-scheduler            2                   3fea2ff6933b4       kube-scheduler-functional-878866            kube-system
	5c87b2b354447       a3e246e9556e9       10 minutes ago      Running             etcd                      2                   638d6d6fc7944       etcd-functional-878866                      kube-system
	301ce41fdfafe       45f3cc72d235f       10 minutes ago      Exited              kube-controller-manager   3                   1a39bbb70829f       kube-controller-manager-functional-878866   kube-system
	d0bf8f349bbc2       a3e246e9556e9       11 minutes ago      Exited              etcd                      1                   638d6d6fc7944       etcd-functional-878866                      kube-system
	9f4729263a3b0       7bb6219ddab95       11 minutes ago      Exited              kube-scheduler            1                   3fea2ff6933b4       kube-scheduler-functional-878866            kube-system
	8ab3cdb5a27f5       aa5e3ebc0dfed       11 minutes ago      Exited              coredns                   1                   ea0409f91e89e       coredns-7d764666f9-9mcjc                    kube-system
	fb6aa836f3149       8a4ded35a3eb1       11 minutes ago      Exited              kube-proxy                1                   eb63a271df8e5       kube-proxy-nv7xx                            kube-system
	
	
	==> containerd <==
	Dec 06 09:48:07 functional-878866 containerd[4558]: time="2025-12-06T09:48:07.092103262Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 06 09:48:07 functional-878866 containerd[4558]: time="2025-12-06T09:48:07.094650671Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:07 functional-878866 containerd[4558]: time="2025-12-06T09:48:07.350367171Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:08 functional-878866 containerd[4558]: time="2025-12-06T09:48:08.023441140Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:08 functional-878866 containerd[4558]: time="2025-12-06T09:48:08.023617968Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 06 09:48:12 functional-878866 containerd[4558]: time="2025-12-06T09:48:12.096615636Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Dec 06 09:48:12 functional-878866 containerd[4558]: time="2025-12-06T09:48:12.101341876Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:12 functional-878866 containerd[4558]: time="2025-12-06T09:48:12.348196563Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:13 functional-878866 containerd[4558]: time="2025-12-06T09:48:13.008714340Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:13 functional-878866 containerd[4558]: time="2025-12-06T09:48:13.008829513Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Dec 06 09:48:16 functional-878866 containerd[4558]: time="2025-12-06T09:48:16.094970883Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Dec 06 09:48:16 functional-878866 containerd[4558]: time="2025-12-06T09:48:16.099262110Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:16 functional-878866 containerd[4558]: time="2025-12-06T09:48:16.352867252Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:17 functional-878866 containerd[4558]: time="2025-12-06T09:48:17.010465023Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:17 functional-878866 containerd[4558]: time="2025-12-06T09:48:17.010629820Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.558937531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.559046428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.559062221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.559758391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.687106280Z" level=info msg="shim disconnected" id=g0c8jtfcz5luk7zauxomyoodt namespace=k8s.io
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.687542771Z" level=warning msg="cleaning up after shim disconnected" id=g0c8jtfcz5luk7zauxomyoodt namespace=k8s.io
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.687602418Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 06 09:48:29 functional-878866 containerd[4558]: time="2025-12-06T09:48:29.010714179Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-878866\""
	Dec 06 09:48:29 functional-878866 containerd[4558]: time="2025-12-06T09:48:29.018489926Z" level=info msg="ImageCreate event name:\"sha256:e9c335317f52720f206b2db3c5f6d8c7fbed1726ccd36409ba6505bf0023fbb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:48:29 functional-878866 containerd[4558]: time="2025-12-06T09:48:29.021848809Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-878866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
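Separate from the rate limiting, containerd repeatedly logs "failed to decode hosts.toml" with an invalid `host` tree, meaning the registry host configuration under /etc/containerd/certs.d could not be parsed and is apparently being ignored (pulls go straight to registry-1.docker.io). For reference, a minimal syntactically valid hosts.toml for docker.io looks like the sketch below; the mirror URL is a hypothetical placeholder, not part of this cluster's configuration:

	# /etc/containerd/certs.d/docker.io/hosts.toml  (sketch; mirror host is hypothetical)
	server = "https://registry-1.docker.io"

	[host."https://mirror.example.internal"]
	  capabilities = ["pull", "resolve"]

With a reachable mirror configured this way, docker.io pulls would be attempted against the mirror before falling back to registry-1.docker.io, which would also sidestep the anonymous rate limit.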
	
	
	==> coredns [468bbd4cc840f32413f20e1e8a02e5a4cb7382aedb4c8a5909efc9aab6bf840a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54038 - 4803 "HINFO IN 6012224714022320600.6695806507940078930. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.498138699s
	
	
	==> coredns [8ab3cdb5a27f5110ee0a75f3126afadad9de822ca2418dec4c39630836e67768] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44124 - 54735 "HINFO IN 6652856036360049109.7864262208624346140. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079558221s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-878866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-878866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-878866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_39_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:39:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-878866
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:52:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:48:58 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:48:58 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:48:58 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:48:58 +0000   Sat, 06 Dec 2025 09:39:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    functional-878866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8a7ede37b9346d29806749a6624cb26
	  System UUID:                d8a7ede3-7b93-46d2-9806-749a6624cb26
	  Boot ID:                    201bfa39-c6a0-473d-92c7-ea19f1cbce81
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-dxgxn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-94flb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-x8f4x                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    9m58s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-9mcjc                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-878866                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-878866              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-878866     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nv7xx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-878866              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-pxs66    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vrq6x          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                Age   From             Message
	  ----    ------                ----  ----             -------
	  Normal  RegisteredNode        12m   node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	  Normal  CIDRAssignmentFailed  12m   cidrAllocator    Node functional-878866 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode        11m   node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	  Normal  RegisteredNode        10m   node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	
	
	==> dmesg <==
	[  +1.183064] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084534] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.104220] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.120351] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.063268] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 09:40] kauditd_printk_skb: 276 callbacks suppressed
	[ +32.658143] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.884559] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.057660] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.250628] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.162312] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 6 09:41] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.117477] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.001336] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.232546] kauditd_printk_skb: 55 callbacks suppressed
	[  +3.667647] kauditd_printk_skb: 74 callbacks suppressed
	[Dec 6 09:42] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.287607] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.861371] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000482] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.941950] kauditd_printk_skb: 194 callbacks suppressed
	[Dec 6 09:48] crun[8706]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.126578] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [5c87b2b35444796d5546b4df37410d5bf4723a21ef12d6bf8f413569ca270286] <==
	{"level":"warn","ts":"2025-12-06T09:41:49.440870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.448509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.456640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.464438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.472941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.485430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.495353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.511314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.524369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.531963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.539221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.547392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.556536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.568757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.578779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.586547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.594627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.602424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.613696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.620753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.628230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.703265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48216","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:51:49.043876Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1401}
	{"level":"info","ts":"2025-12-06T09:51:49.068804Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1401,"took":"23.821634ms","hash":946335887,"current-db-size-bytes":4071424,"current-db-size":"4.1 MB","current-db-size-in-use-bytes":2134016,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-06T09:51:49.068838Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":946335887,"revision":1401,"compact-revision":-1}
	
	
	==> etcd [d0bf8f349bbc290485d8210fdd2f3fb4eb8be97bd7679ccbac00714d84bff7cd] <==
	{"level":"warn","ts":"2025-12-06T09:41:05.651066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.667701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.680961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.685784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.697007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.706722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.756545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39140","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:41:41.304782Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T09:41:41.304851Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-878866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	{"level":"error","ts":"2025-12-06T09:41:41.304928Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:41:41.304984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:41:41.306703Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.306771Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"324857e3fe6e5c62","current-leader-member-id":"324857e3fe6e5c62"}
	{"level":"info","ts":"2025-12-06T09:41:41.306847Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T09:41:41.306875Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307236Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307312Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:41:41.307321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307640Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307777Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:41:41.307788Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.195:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.310487Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"error","ts":"2025-12-06T09:41:41.310547Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.195:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.310648Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2025-12-06T09:41:41.310666Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-878866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	
	
	==> kernel <==
	 09:52:27 up 13 min,  0 users,  load average: 0.47, 0.37, 0.28
	Linux functional-878866 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [bd0d67905e6c1f2134865dda105fed233a0a83a241a0356900579bf523721f2d] <==
	I1206 09:41:50.417403       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:41:50.434798       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:41:50.465437       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:41:50.469440       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:41:50.473406       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:50.491599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:41:50.497599       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:41:51.145164       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:41:51.275751       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1206 09:41:51.812960       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I1206 09:41:51.814646       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:41:51.820722       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:41:52.396418       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:41:52.443361       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:41:52.471343       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:41:52.482831       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:41:54.504347       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:42:11.434844       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.54.232"}
	I1206 09:42:15.521981       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.151.210"}
	I1206 09:42:25.195765       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:42:25.470176       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.252.61"}
	I1206 09:42:25.655391       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.187.143"}
	I1206 09:42:25.673856       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.202.77"}
	I1206 09:42:28.270311       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.110.238"}
	I1206 09:51:50.368752       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [301ce41fdfafed40e68e49ee314f81bbc53e16ad44cc043d94fd73225b9fecb3] <==
	I1206 09:41:41.658756       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:41:41.672446       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1206 09:41:41.672485       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:41.674118       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1206 09:41:41.674205       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1206 09:41:41.674208       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1206 09:41:41.674341       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:41:51.681697       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [bdb6875900eefdb8a5d6f908a4e221a38a6d54ba97657137646fc200dba38a89] <==
	I1206 09:41:54.128709       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.119713       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128530       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128535       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128541       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128545       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128972       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128978       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.120273       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128511       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.120289       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.119701       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.141422       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.216177       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.221533       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.221643       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:41:54.221649       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E1206 09:42:25.327816       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.333329       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.341834       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.375193       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.375293       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.416821       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.422484       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.432393       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [242f635ee3d72904c3cbe96ad4ab51fe3b6e05ba970b68336d7568b6dc232a80] <==
	I1206 09:41:51.821997       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:41:51.925315       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:51.925336       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1206 09:41:51.925415       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:41:51.964879       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:41:51.964934       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:41:51.964953       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:41:51.974241       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:41:51.974520       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:41:51.974531       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:51.978887       1 config.go:200] "Starting service config controller"
	I1206 09:41:51.978913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:41:51.978938       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:41:51.978943       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:41:51.978952       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:41:51.978956       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:41:51.979291       1 config.go:309] "Starting node config controller"
	I1206 09:41:51.979296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:41:51.979308       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:41:52.079337       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:41:52.079359       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:41:52.079396       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fb6aa836f31492adc7d3b470fb9e656f39e0c5c2645af9fe9a1150d9e1c0e275] <==
	I1206 09:40:56.864442       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1206 09:40:56.864503       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:40:56.901030       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:40:56.901092       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:40:56.901132       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:40:56.910200       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:40:56.910549       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:40:56.910625       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:40:56.916862       1 config.go:200] "Starting service config controller"
	I1206 09:40:56.916898       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:40:56.916912       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:40:56.916916       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:40:56.916926       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:40:56.916929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:40:56.918717       1 config.go:309] "Starting node config controller"
	I1206 09:40:56.918729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:40:56.918734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1206 09:40:56.919143       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.195:8441: connect: connection refused"
	E1206 09:41:06.427779       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]"
	I1206 09:41:06.517987       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:41:16.917989       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:41:17.418051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [27cb54bb377297f85fd874a25db94789aca7a4095cbd4184870c7fce9e0dcd66] <==
	I1206 09:41:42.817226       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:41:42.823657       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.195:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.195:8441: connect: connection refused
	W1206 09:41:42.823750       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:41:42.823769       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:41:42.832420       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:41:42.832436       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:42.835185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:41:42.835306       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:41:42.835368       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:41:42.835534       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:41:50.324649       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:41:50.325177       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:41:50.325711       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:41:50.332323       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1206 09:41:50.337052       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1206 09:41:50.337336       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I1206 09:41:50.935781       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [9f4729263a3b0d4ea91d2b994e07e1f3295d8db7192fee9b108c8f73e694abcd] <==
	I1206 09:40:47.309076       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:40:55.199596       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:40:55.199724       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:40:55.205483       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:40:55.205640       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1206 09:40:55.205653       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.205669       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:40:55.208848       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:40:55.208881       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.208906       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:40:55.209295       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.305768       1 shared_informer.go:377] "Caches are synced"
	I1206 09:40:55.308994       1 shared_informer.go:377] "Caches are synced"
	I1206 09:40:55.310391       1 shared_informer.go:377] "Caches are synced"
	E1206 09:41:06.338108       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1206 09:41:06.359751       1 reflector.go:204] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1206 09:41:06.427066       1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1206 09:41:41.397323       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 09:41:41.397810       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:41:41.398141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:41:41.398167       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1206 09:41:41.398479       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 09:41:41.398664       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 09:41:41.398896       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 09:41:41.399017       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 06 09:51:54 functional-878866 kubelet[5363]: E1206 09:51:54.094681    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" podUID="ca4a50bc-be9f-42d9-8667-c0c28149a805"
	Dec 06 09:51:55 functional-878866 kubelet[5363]: E1206 09:51:55.092210    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" containerName="dashboard-metrics-scraper"
	Dec 06 09:51:55 functional-878866 kubelet[5363]: E1206 09:51:55.094140    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" podUID="bdd45c49-4f7d-4c58-bf71-55d765230fe9"
	Dec 06 09:51:56 functional-878866 kubelet[5363]: E1206 09:51:56.094854    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:51:57 functional-878866 kubelet[5363]: E1206 09:51:57.092275    5363 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-878866" containerName="kube-scheduler"
	Dec 06 09:51:57 functional-878866 kubelet[5363]: E1206 09:51:57.092451    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	Dec 06 09:51:59 functional-878866 kubelet[5363]: E1206 09:51:59.092743    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-dxgxn" podUID="fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68"
	Dec 06 09:51:59 functional-878866 kubelet[5363]: E1206 09:51:59.093800    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-x8f4x" podUID="c063e131-315a-4d95-80e7-6710bd46865b"
	Dec 06 09:52:06 functional-878866 kubelet[5363]: E1206 09:52:06.091883    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" containerName="kubernetes-dashboard"
	Dec 06 09:52:06 functional-878866 kubelet[5363]: E1206 09:52:06.093344    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" podUID="ca4a50bc-be9f-42d9-8667-c0c28149a805"
	Dec 06 09:52:09 functional-878866 kubelet[5363]: E1206 09:52:09.092412    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" containerName="dashboard-metrics-scraper"
	Dec 06 09:52:09 functional-878866 kubelet[5363]: E1206 09:52:09.092972    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:52:09 functional-878866 kubelet[5363]: E1206 09:52:09.093199    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	Dec 06 09:52:09 functional-878866 kubelet[5363]: E1206 09:52:09.094856    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" podUID="bdd45c49-4f7d-4c58-bf71-55d765230fe9"
	Dec 06 09:52:12 functional-878866 kubelet[5363]: E1206 09:52:12.092753    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-dxgxn" podUID="fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68"
	Dec 06 09:52:13 functional-878866 kubelet[5363]: E1206 09:52:13.092788    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-x8f4x" podUID="c063e131-315a-4d95-80e7-6710bd46865b"
	Dec 06 09:52:17 functional-878866 kubelet[5363]: E1206 09:52:17.092002    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" containerName="kubernetes-dashboard"
	Dec 06 09:52:17 functional-878866 kubelet[5363]: E1206 09:52:17.093723    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" podUID="ca4a50bc-be9f-42d9-8667-c0c28149a805"
	Dec 06 09:52:20 functional-878866 kubelet[5363]: E1206 09:52:20.097672    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.092477    5363 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9mcjc" containerName="coredns"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.092916    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" containerName="dashboard-metrics-scraper"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.093658    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.094043    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-dxgxn" podUID="fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.095466    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" podUID="bdd45c49-4f7d-4c58-bf71-55d765230fe9"
	Dec 06 09:52:27 functional-878866 kubelet[5363]: E1206 09:52:27.094347    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-x8f4x" podUID="c063e131-315a-4d95-80e7-6710bd46865b"
	
	
	==> storage-provisioner [113437169b1c37974e96925307abdbb55cfc82e538a9033341e8c43d02a37a4d] <==
	W1206 09:52:02.632197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:04.635662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:04.641755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:06.645493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:06.650796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:08.654720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:08.662681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:10.666840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:10.672941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:12.675870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:12.683229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:14.685638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:14.690513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:16.693993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:16.700187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:18.704489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:18.711234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:20.714802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:20.724109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:22.727494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:22.735691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:24.738933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:24.743347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:26.747442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:26.759740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dd7657ebf0fb090991f2e33ae757c5f56eeb3eaf14b5863f3aedd29af194ccd3] <==
	I1206 09:41:51.647942       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:41:51.652108       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-878866 -n functional-878866
helpers_test.go:269: (dbg) Run:  kubectl --context functional-878866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x: exit status 1 (100.880545ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  containerd://a655845f1732c2d1de014118f364a31fd9020fa965e1a3db715a9272b206b9f5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:42:18 +0000
	      Finished:     Sat, 06 Dec 2025 09:42:18 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxrxb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fxrxb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-878866
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 727ms (727ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    10m   kubelet            Container created
	  Normal  Started    10m   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-dxgxn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:15 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k6dw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9k6dw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-5758569b79-dxgxn to functional-878866
	  Warning  Failed     8m41s (x3 over 9m54s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m19s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m18s (x2 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m18s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x41 over 10m)      kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3s (x41 over 10m)      kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-94flb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nc2xk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nc2xk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-94flb to functional-878866
	  Warning  Failed     9m23s                 kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m2s (x4 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-844cf969f6-x8f4x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:28 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lbjz9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lbjz9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m59s                   default-scheduler  Successfully assigned default/mysql-844cf969f6-x8f4x to functional-878866
	  Normal   Pulling    7m4s (x5 over 9m59s)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m3s (x5 over 9m57s)    kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m3s (x5 over 9m57s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m52s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m37s (x21 over 9m57s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:24 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lzvcd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-lzvcd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/sp-pod to functional-878866
	  Normal   Pulling    7m16s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-pxs66" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-vrq6x" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.63s)
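Editor's note: every failure in this block traces back to the same cause visible in the pod events above: Docker Hub's unauthenticated pull rate limit (HTTP 429 toomanyrequests) on docker.io/mysql:5.7 and docker.io/nginx, not a scheduling or storage problem. As a minimal sketch (not part of the test suite, assuming only a reachable cluster via the default kubeconfig; the pod and namespace names are the ones from the log), the following client-go snippet prints the waiting reason and registry message for each stuck container, which is essentially the information the describe output above records:

// imagepull_check.go: illustrative only. Lists containers stuck in
// ImagePullBackOff/ErrImagePull for one pod; pod and namespace names are
// taken from the log above and are otherwise placeholders.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := clientset.CoreV1().Pods("default").Get(context.Background(), "sp-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			// For the failures above this prints ImagePullBackOff / ErrImagePull,
			// with the registry's 429 "toomanyrequests" text carried in w.Message.
			fmt.Printf("%s: %s (%s)\n", cs.Name, w.Reason, w.Message)
		}
	}
}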

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (369.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [6e95d4b1-4423-4162-a5f9-909dca42ab36] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007063956s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-878866 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-878866 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-878866 get pvc myclaim -o=json
I1206 09:42:22.552198  387687 retry.go:31] will retry after 1.281501434s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:15bbb43d-9427-453c-8b2c-4c0cb7ebb869 ResourceVersion:741 Generation:0 CreationTimestamp:2025-12-06 09:42:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a02260 VolumeMode:0xc001a02270 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-878866 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-878866 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ffe9435b-fd28-4a28-8bbe-994ce1895e67] Pending
helpers_test.go:352: "sp-pod" [ffe9435b-fd28-4a28-8bbe-994ce1895e67] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-878866 -n functional-878866
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-06 09:48:24.27562747 +0000 UTC m=+2277.563004594
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-878866 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-878866 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-878866/192.168.39.195
Start Time:       Sat, 06 Dec 2025 09:42:24 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lzvcd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-lzvcd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-878866
  Normal   Pulling    3m13s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     3m12s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m12s (x5 over 5m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     50s (x20 over 5m58s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    38s (x21 over 5m58s)   kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-878866 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-878866 logs sp-pod -n default: exit status 1 (68.69336ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-878866 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
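Editor's note: the PVC itself bound normally (the retry log above shows a single Pending check before Bound); the 6m0s budget is spent waiting for sp-pod, which never starts because of the same docker.io/nginx rate limit. The following is a minimal sketch of the wait-for-Bound polling the test performs before creating sp-pod, assuming the default kubeconfig and the claim name from the log (myclaim in namespace default); the interval and budget are illustrative, not the test's actual values:

// pvcwait.go: illustrative only, not the project's actual helper. Polls the
// PVC until its phase is Bound or a hypothetical 4-minute budget expires.
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for {
		pvc, err := clientset.CoreV1().PersistentVolumeClaims("default").Get(context.Background(), "myclaim", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if pvc.Status.Phase == corev1.ClaimBound {
			fmt.Println("PVC myclaim is Bound")
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("timed out; PVC phase is still %q", pvc.Status.Phase))
		}
		// Mirrors the retry behaviour seen in the log: back off briefly and re-check.
		time.Sleep(2 * time.Second)
	}
}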
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-878866 -n functional-878866
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 logs -n 25: (1.237572895s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-878866 ssh sudo umount -f /mount-9p                                                                                                    │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ image     │ functional-878866 image ls                                                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh findmnt -T /mount1                                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ mount     │ -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount1 --alsologtostderr -v=1                │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ mount     │ -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount2 --alsologtostderr -v=1                │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ image     │ functional-878866 image save --daemon kicbase/echo-server:functional-878866 --alsologtostderr                                                     │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh echo hello                                                                                                                  │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh findmnt -T /mount1                                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh cat /etc/hostname                                                                                                           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh findmnt -T /mount2                                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ addons    │ functional-878866 addons list                                                                                                                     │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh findmnt -T /mount3                                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ addons    │ functional-878866 addons list -o json                                                                                                             │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ start     │ -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ mount     │ -p functional-878866 --kill=true                                                                                                                  │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ start     │ -p functional-878866 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ start     │ -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-878866 --alsologtostderr -v=1                                                                                    │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ ssh       │ functional-878866 ssh sudo cat /etc/ssl/certs/387687.pem                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /usr/share/ca-certificates/387687.pem                                                                              │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /etc/ssl/certs/3876872.pem                                                                                         │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /usr/share/ca-certificates/3876872.pem                                                                             │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh       │ functional-878866 ssh sudo cat /etc/test/nested/copy/387687/hosts                                                                                 │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:42:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:42:24.019904  403261 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:42:24.020138  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020149  403261 out.go:374] Setting ErrFile to fd 2...
	I1206 09:42:24.020155  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020466  403261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:42:24.020924  403261 out.go:368] Setting JSON to false
	I1206 09:42:24.021821  403261 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8694,"bootTime":1765005450,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:42:24.021904  403261 start.go:143] virtualization: kvm guest
	I1206 09:42:24.023333  403261 out.go:179] * [functional-878866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:42:24.025176  403261 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:42:24.025162  403261 notify.go:221] Checking for updates...
	I1206 09:42:24.026373  403261 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:42:24.027496  403261 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:42:24.028692  403261 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:42:24.029895  403261 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:42:24.031067  403261 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:42:24.020670  403254 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:24.021222  403254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:24.060370  403254 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:42:24.061450  403254 start.go:309] selected driver: kvm2
	I1206 09:42:24.061471  403254 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.061626  403254 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:24.063197  403254 cni.go:84] Creating CNI manager for ""
	I1206 09:42:24.063287  403254 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:42:24.063349  403254 start.go:353] cluster config:
	{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.065378  403254 out.go:179] * dry-run validation complete!
	I1206 09:42:24.032603  403261 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:24.033150  403261 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:24.072141  403261 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:42:24.073419  403261 start.go:309] selected driver: kvm2
	I1206 09:42:24.073436  403261 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.073570  403261 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:24.075812  403261 out.go:203] 
	W1206 09:42:24.076830  403261 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:42:24.077921  403261 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a655845f1732c       56cc512116c8f       6 minutes ago       Exited              mount-munger              0                   58726284f6200       busybox-mount                               default
	113437169b1c3       6e38f40d628db       6 minutes ago       Running             storage-provisioner       4                   dbe75fc56a826       storage-provisioner                         kube-system
	bdb6875900eef       45f3cc72d235f       6 minutes ago       Running             kube-controller-manager   4                   1a39bbb70829f       kube-controller-manager-functional-878866   kube-system
	468bbd4cc840f       aa5e3ebc0dfed       6 minutes ago       Running             coredns                   2                   ea0409f91e89e       coredns-7d764666f9-9mcjc                    kube-system
	242f635ee3d72       8a4ded35a3eb1       6 minutes ago       Running             kube-proxy                2                   eb63a271df8e5       kube-proxy-nv7xx                            kube-system
	dd7657ebf0fb0       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       3                   dbe75fc56a826       storage-provisioner                         kube-system
	bd0d67905e6c1       aa9d02839d8de       6 minutes ago       Running             kube-apiserver            0                   17a7847a72c32       kube-apiserver-functional-878866            kube-system
	27cb54bb37729       7bb6219ddab95       6 minutes ago       Running             kube-scheduler            2                   3fea2ff6933b4       kube-scheduler-functional-878866            kube-system
	5c87b2b354447       a3e246e9556e9       6 minutes ago       Running             etcd                      2                   638d6d6fc7944       etcd-functional-878866                      kube-system
	301ce41fdfafe       45f3cc72d235f       6 minutes ago       Exited              kube-controller-manager   3                   1a39bbb70829f       kube-controller-manager-functional-878866   kube-system
	d0bf8f349bbc2       a3e246e9556e9       7 minutes ago       Exited              etcd                      1                   638d6d6fc7944       etcd-functional-878866                      kube-system
	9f4729263a3b0       7bb6219ddab95       7 minutes ago       Exited              kube-scheduler            1                   3fea2ff6933b4       kube-scheduler-functional-878866            kube-system
	8ab3cdb5a27f5       aa5e3ebc0dfed       7 minutes ago       Exited              coredns                   1                   ea0409f91e89e       coredns-7d764666f9-9mcjc                    kube-system
	fb6aa836f3149       8a4ded35a3eb1       7 minutes ago       Exited              kube-proxy                1                   eb63a271df8e5       kube-proxy-nv7xx                            kube-system
	
	
	==> containerd <==
	Dec 06 09:48:00 functional-878866 containerd[4558]: time="2025-12-06T09:48:00.227371670Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 06 09:48:00 functional-878866 containerd[4558]: time="2025-12-06T09:48:00.228939946Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:00 functional-878866 containerd[4558]: time="2025-12-06T09:48:00.486870418Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:01 functional-878866 containerd[4558]: time="2025-12-06T09:48:01.148407948Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:01 functional-878866 containerd[4558]: time="2025-12-06T09:48:01.148489474Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Dec 06 09:48:03 functional-878866 containerd[4558]: time="2025-12-06T09:48:03.093044836Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 06 09:48:03 functional-878866 containerd[4558]: time="2025-12-06T09:48:03.096061556Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:03 functional-878866 containerd[4558]: time="2025-12-06T09:48:03.346925376Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:04 functional-878866 containerd[4558]: time="2025-12-06T09:48:04.011344653Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:04 functional-878866 containerd[4558]: time="2025-12-06T09:48:04.011466912Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 06 09:48:07 functional-878866 containerd[4558]: time="2025-12-06T09:48:07.092103262Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 06 09:48:07 functional-878866 containerd[4558]: time="2025-12-06T09:48:07.094650671Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:07 functional-878866 containerd[4558]: time="2025-12-06T09:48:07.350367171Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:08 functional-878866 containerd[4558]: time="2025-12-06T09:48:08.023441140Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:08 functional-878866 containerd[4558]: time="2025-12-06T09:48:08.023617968Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 06 09:48:12 functional-878866 containerd[4558]: time="2025-12-06T09:48:12.096615636Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Dec 06 09:48:12 functional-878866 containerd[4558]: time="2025-12-06T09:48:12.101341876Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:12 functional-878866 containerd[4558]: time="2025-12-06T09:48:12.348196563Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:13 functional-878866 containerd[4558]: time="2025-12-06T09:48:13.008714340Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:13 functional-878866 containerd[4558]: time="2025-12-06T09:48:13.008829513Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Dec 06 09:48:16 functional-878866 containerd[4558]: time="2025-12-06T09:48:16.094970883Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Dec 06 09:48:16 functional-878866 containerd[4558]: time="2025-12-06T09:48:16.099262110Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:16 functional-878866 containerd[4558]: time="2025-12-06T09:48:16.352867252Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:17 functional-878866 containerd[4558]: time="2025-12-06T09:48:17.010465023Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:17 functional-878866 containerd[4558]: time="2025-12-06T09:48:17.010629820Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	
	
	==> coredns [468bbd4cc840f32413f20e1e8a02e5a4cb7382aedb4c8a5909efc9aab6bf840a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54038 - 4803 "HINFO IN 6012224714022320600.6695806507940078930. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.498138699s
	
	
	==> coredns [8ab3cdb5a27f5110ee0a75f3126afadad9de822ca2418dec4c39630836e67768] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44124 - 54735 "HINFO IN 6652856036360049109.7864262208624346140. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079558221s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-878866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-878866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-878866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_39_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:39:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-878866
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:48:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:42:51 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:42:51 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:42:51 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:42:51 +0000   Sat, 06 Dec 2025 09:39:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    functional-878866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8a7ede37b9346d29806749a6624cb26
	  System UUID:                d8a7ede3-7b93-46d2-9806-749a6624cb26
	  Boot ID:                    201bfa39-c6a0-473d-92c7-ea19f1cbce81
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-dxgxn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  default                     hello-node-connect-9f67c86d4-94flb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  default                     mysql-844cf969f6-x8f4x                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m57s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-7d764666f9-9mcjc                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m31s
	  kube-system                 etcd-functional-878866                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m37s
	  kube-system                 kube-apiserver-functional-878866              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-functional-878866     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-proxy-nv7xx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-scheduler-functional-878866              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-pxs66    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vrq6x          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                Age    From             Message
	  ----    ------                ----   ----             -------
	  Normal  RegisteredNode        8m32s  node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	  Normal  CIDRAssignmentFailed  8m32s  cidrAllocator    Node functional-878866 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode        7m16s  node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	  Normal  RegisteredNode        6m31s  node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	
	
	==> dmesg <==
	[  +0.000044] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000568] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.183064] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084534] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.104220] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.120351] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.063268] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 09:40] kauditd_printk_skb: 276 callbacks suppressed
	[ +32.658143] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.884559] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.057660] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.250628] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.162312] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 6 09:41] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.117477] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.001336] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.232546] kauditd_printk_skb: 55 callbacks suppressed
	[  +3.667647] kauditd_printk_skb: 74 callbacks suppressed
	[Dec 6 09:42] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.287607] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.861371] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000482] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.941950] kauditd_printk_skb: 194 callbacks suppressed
	
	
	==> etcd [5c87b2b35444796d5546b4df37410d5bf4723a21ef12d6bf8f413569ca270286] <==
	{"level":"warn","ts":"2025-12-06T09:41:49.410772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.422042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.432754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.440870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.448509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.456640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.464438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.472941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.485430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.495353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.511314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.524369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.531963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.539221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.547392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.556536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.568757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.578779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.586547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.594627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.602424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.613696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.620753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.628230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.703265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48216","server-name":"","error":"EOF"}
	
	
	==> etcd [d0bf8f349bbc290485d8210fdd2f3fb4eb8be97bd7679ccbac00714d84bff7cd] <==
	{"level":"warn","ts":"2025-12-06T09:41:05.651066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.667701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.680961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.685784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.697007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.706722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.756545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39140","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:41:41.304782Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T09:41:41.304851Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-878866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	{"level":"error","ts":"2025-12-06T09:41:41.304928Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:41:41.304984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:41:41.306703Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.306771Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"324857e3fe6e5c62","current-leader-member-id":"324857e3fe6e5c62"}
	{"level":"info","ts":"2025-12-06T09:41:41.306847Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T09:41:41.306875Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307236Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307312Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:41:41.307321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307640Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307777Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:41:41.307788Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.195:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.310487Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"error","ts":"2025-12-06T09:41:41.310547Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.195:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.310648Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2025-12-06T09:41:41.310666Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-878866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	
	
	==> kernel <==
	 09:48:25 up 9 min,  0 users,  load average: 0.41, 0.41, 0.27
	Linux functional-878866 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [bd0d67905e6c1f2134865dda105fed233a0a83a241a0356900579bf523721f2d] <==
	I1206 09:41:50.417399       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:41:50.417403       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:41:50.434798       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:41:50.465437       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:41:50.469440       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:41:50.473406       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:50.491599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:41:50.497599       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:41:51.145164       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:41:51.275751       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1206 09:41:51.812960       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I1206 09:41:51.814646       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:41:51.820722       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:41:52.396418       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:41:52.443361       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:41:52.471343       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:41:52.482831       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:41:54.504347       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:42:11.434844       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.54.232"}
	I1206 09:42:15.521981       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.151.210"}
	I1206 09:42:25.195765       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:42:25.470176       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.252.61"}
	I1206 09:42:25.655391       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.187.143"}
	I1206 09:42:25.673856       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.202.77"}
	I1206 09:42:28.270311       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.110.238"}
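The "allocated clusterIPs" entries above record the Services created by the functional tests (hello-node, hello-node-connect, mysql and the dashboard). A quick way to cross-check those allocations against the live cluster, as a sketch:

    kubectl --context functional-878866 get svc -A -o wide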
	
	
	==> kube-controller-manager [301ce41fdfafed40e68e49ee314f81bbc53e16ad44cc043d94fd73225b9fecb3] <==
	I1206 09:41:41.658756       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:41:41.672446       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1206 09:41:41.672485       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:41.674118       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1206 09:41:41.674205       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1206 09:41:41.674208       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1206 09:41:41.674341       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:41:51.681697       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
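This single "Error building controller context" comes from the controller manager giving up while the apiserver's rbac/bootstrap-roles post-start hook was still failing; the replacement controller-manager instance below starts cleanly once bootstrapping finishes. The same per-check breakdown can be read directly from the apiserver, as a sketch using this profile's context:

    kubectl --context functional-878866 get --raw='/readyz?verbose'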
	
	
	==> kube-controller-manager [bdb6875900eefdb8a5d6f908a4e221a38a6d54ba97657137646fc200dba38a89] <==
	I1206 09:41:54.128709       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.119713       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128530       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128535       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128541       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128545       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128972       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128978       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.120273       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128511       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.120289       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.119701       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.141422       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.216177       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.221533       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.221643       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:41:54.221649       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E1206 09:42:25.327816       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.333329       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.341834       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.375193       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.375293       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.416821       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.422484       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.432393       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [242f635ee3d72904c3cbe96ad4ab51fe3b6e05ba970b68336d7568b6dc232a80] <==
	I1206 09:41:51.821997       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:41:51.925315       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:51.925336       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1206 09:41:51.925415       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:41:51.964879       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:41:51.964934       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:41:51.964953       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:41:51.974241       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:41:51.974520       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:41:51.974531       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:51.978887       1 config.go:200] "Starting service config controller"
	I1206 09:41:51.978913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:41:51.978938       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:41:51.978943       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:41:51.978952       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:41:51.978956       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:41:51.979291       1 config.go:309] "Starting node config controller"
	I1206 09:41:51.979296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:41:51.979308       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:41:52.079337       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:41:52.079359       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:41:52.079396       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
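Both kube-proxy instances log the same "nodePortAddresses is unset" warning. Acting on its suggestion would mean setting the corresponding field in the kube-proxy ConfigMap that kubeadm-style clusters (including minikube) use; a sketch, where the "primary" value is the mode the suggested --nodeport-addresses flag maps to on recent releases:

    kubectl --context functional-878866 -n kube-system get configmap kube-proxy -o yaml
    # then, in the KubeProxyConfiguration stored under config.conf:
    #   nodePortAddresses:
    #   - primary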
	
	
	==> kube-proxy [fb6aa836f31492adc7d3b470fb9e656f39e0c5c2645af9fe9a1150d9e1c0e275] <==
	I1206 09:40:56.864442       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1206 09:40:56.864503       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:40:56.901030       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:40:56.901092       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:40:56.901132       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:40:56.910200       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:40:56.910549       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:40:56.910625       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:40:56.916862       1 config.go:200] "Starting service config controller"
	I1206 09:40:56.916898       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:40:56.916912       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:40:56.916916       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:40:56.916926       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:40:56.916929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:40:56.918717       1 config.go:309] "Starting node config controller"
	I1206 09:40:56.918729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:40:56.918734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1206 09:40:56.919143       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.195:8441: connect: connection refused"
	E1206 09:41:06.427779       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]"
	I1206 09:41:06.517987       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:41:16.917989       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:41:17.418051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [27cb54bb377297f85fd874a25db94789aca7a4095cbd4184870c7fce9e0dcd66] <==
	I1206 09:41:42.817226       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:41:42.823657       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.195:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.195:8441: connect: connection refused
	W1206 09:41:42.823750       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:41:42.823769       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:41:42.832420       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:41:42.832436       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:42.835185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:41:42.835306       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:41:42.835368       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:41:42.835534       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:41:50.324649       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:41:50.325177       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:41:50.325711       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:41:50.332323       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1206 09:41:50.337052       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1206 09:41:50.337336       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I1206 09:41:50.935781       1 shared_informer.go:377] "Caches are synced"
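The "forbidden ... RBAC ... not found" watch errors in both scheduler logs are transient: they occur while the apiserver's bootstrap-roles hook (the failing check reported by the controller manager above) is still recreating the default ClusterRoles, and they stop once the caches sync. Whether the scheduler's permissions are back in place can be spot-checked with impersonation, as a sketch:

    kubectl --context functional-878866 auth can-i watch nodes --as=system:kube-scheduler
    kubectl --context functional-878866 get clusterrolebinding system:kube-scheduler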
	
	
	==> kube-scheduler [9f4729263a3b0d4ea91d2b994e07e1f3295d8db7192fee9b108c8f73e694abcd] <==
	I1206 09:40:47.309076       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:40:55.199596       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:40:55.199724       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:40:55.205483       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:40:55.205640       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1206 09:40:55.205653       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.205669       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:40:55.208848       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:40:55.208881       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.208906       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:40:55.209295       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.305768       1 shared_informer.go:377] "Caches are synced"
	I1206 09:40:55.308994       1 shared_informer.go:377] "Caches are synced"
	I1206 09:40:55.310391       1 shared_informer.go:377] "Caches are synced"
	E1206 09:41:06.338108       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1206 09:41:06.359751       1 reflector.go:204] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1206 09:41:06.427066       1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1206 09:41:41.397323       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 09:41:41.397810       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:41:41.398141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:41:41.398167       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1206 09:41:41.398479       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 09:41:41.398664       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 09:41:41.398896       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 09:41:41.399017       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 06 09:48:01 functional-878866 kubelet[5363]: E1206 09:48:01.149062    5363 kuberuntime_manager.go:1664] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(ffe9435b-fd28-4a28-8bbe-994ce1895e67): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:48:01 functional-878866 kubelet[5363]: E1206 09:48:01.149116    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:48:04 functional-878866 kubelet[5363]: E1206 09:48:04.011712    5363 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 06 09:48:04 functional-878866 kubelet[5363]: E1206 09:48:04.011782    5363 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 06 09:48:04 functional-878866 kubelet[5363]: E1206 09:48:04.011964    5363 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-5758569b79-dxgxn_default(fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:48:04 functional-878866 kubelet[5363]: E1206 09:48:04.011996    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-dxgxn" podUID="fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68"
	Dec 06 09:48:08 functional-878866 kubelet[5363]: E1206 09:48:08.023752    5363 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 06 09:48:08 functional-878866 kubelet[5363]: E1206 09:48:08.023815    5363 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 06 09:48:08 functional-878866 kubelet[5363]: E1206 09:48:08.024490    5363 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-9f67c86d4-94flb_default(ccbb1545-75f9-4cb6-a66a-541dd12483f3): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:48:08 functional-878866 kubelet[5363]: E1206 09:48:08.024533    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	Dec 06 09:48:12 functional-878866 kubelet[5363]: E1206 09:48:12.091901    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" containerName="kubernetes-dashboard"
	Dec 06 09:48:12 functional-878866 kubelet[5363]: E1206 09:48:12.092815    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" containerName="dashboard-metrics-scraper"
	Dec 06 09:48:12 functional-878866 kubelet[5363]: E1206 09:48:12.095362    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" podUID="ca4a50bc-be9f-42d9-8667-c0c28149a805"
	Dec 06 09:48:13 functional-878866 kubelet[5363]: E1206 09:48:13.009242    5363 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:48:13 functional-878866 kubelet[5363]: E1206 09:48:13.009290    5363 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:48:13 functional-878866 kubelet[5363]: E1206 09:48:13.009459    5363 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-5565989548-pxs66_kubernetes-dashboard(bdd45c49-4f7d-4c58-bf71-55d765230fe9): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:48:13 functional-878866 kubelet[5363]: E1206 09:48:13.009493    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" podUID="bdd45c49-4f7d-4c58-bf71-55d765230fe9"
	Dec 06 09:48:16 functional-878866 kubelet[5363]: E1206 09:48:16.093996    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:48:17 functional-878866 kubelet[5363]: E1206 09:48:17.010850    5363 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 06 09:48:17 functional-878866 kubelet[5363]: E1206 09:48:17.010914    5363 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 06 09:48:17 functional-878866 kubelet[5363]: E1206 09:48:17.011091    5363 kuberuntime_manager.go:1664] "Unhandled Error" err="container mysql start failed in pod mysql-844cf969f6-x8f4x_default(c063e131-315a-4d95-80e7-6710bd46865b): ErrImagePull: failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:48:17 functional-878866 kubelet[5363]: E1206 09:48:17.011167    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-x8f4x" podUID="c063e131-315a-4d95-80e7-6710bd46865b"
	Dec 06 09:48:17 functional-878866 kubelet[5363]: E1206 09:48:17.092398    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-dxgxn" podUID="fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68"
	Dec 06 09:48:20 functional-878866 kubelet[5363]: E1206 09:48:20.092914    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	Dec 06 09:48:24 functional-878866 kubelet[5363]: E1206 09:48:24.092511    5363 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9mcjc" containerName="coredns"
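Every pull failure in the kubelet log has the same root cause: unauthenticated pulls from registry-1.docker.io are being throttled with HTTP 429, so the nginx, mysql, echo-server and dashboard images all end up in ImagePullBackOff. Two common mitigations for this kind of run, sketched with placeholder credentials, are to side-load an image from the host's local image store into the node, or to pull as an authenticated user via an image pull secret referenced from the pod spec:

    minikube -p functional-878866 image load kicbase/echo-server:latest
    kubectl --context functional-878866 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<access-token>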
	
	
	==> storage-provisioner [113437169b1c37974e96925307abdbb55cfc82e538a9033341e8c43d02a37a4d] <==
	W1206 09:48:01.400437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:03.404327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:03.408896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:05.412912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:05.421664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:07.425054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:07.429654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:09.432768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:09.441111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:11.445492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:11.450648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:13.453904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:13.462132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:15.465121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:15.471777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:17.474650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:17.484464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:19.488364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:19.492507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:21.496077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:21.504287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:23.508183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:23.517008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:25.521741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:25.528147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
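The stream of "v1 Endpoints is deprecated" warnings is emitted by the storage provisioner's client (most likely its Endpoints-based leader-election lock) and is benign on this cluster version; the replacement objects the warning points at can be listed directly, as a sketch:

    kubectl --context functional-878866 -n kube-system get endpointslices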
	
	
	==> storage-provisioner [dd7657ebf0fb090991f2e33ae757c5f56eeb3eaf14b5863f3aedd29af194ccd3] <==
	I1206 09:41:51.647942       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:41:51.652108       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-878866 -n functional-878866
helpers_test.go:269: (dbg) Run:  kubectl --context functional-878866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x: exit status 1 (98.293406ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  containerd://a655845f1732c2d1de014118f364a31fd9020fa965e1a3db715a9272b206b9f5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:42:18 +0000
	      Finished:     Sat, 06 Dec 2025 09:42:18 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxrxb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fxrxb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m9s  default-scheduler  Successfully assigned default/busybox-mount to functional-878866
	  Normal  Pulling    6m8s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m8s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 727ms (727ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    6m8s  kubelet            Container created
	  Normal  Started    6m8s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-dxgxn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:15 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k6dw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9k6dw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m11s                  default-scheduler  Successfully assigned default/hello-node-5758569b79-dxgxn to functional-878866
	  Warning  Failed     4m40s (x3 over 5m53s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m18s (x5 over 6m10s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m17s (x2 over 6m9s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m17s (x5 over 6m9s)   kubelet            Error: ErrImagePull
	  Warning  Failed     63s (x19 over 6m9s)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    37s (x21 over 6m9s)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-94flb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nc2xk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nc2xk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m1s                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-94flb to functional-878866
	  Warning  Failed     5m22s                 kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m2s (x5 over 6m1s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m1s (x4 over 6m)     kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m1s (x5 over 6m)     kubelet            Error: ErrImagePull
	  Normal   BackOff    55s (x21 over 5m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     55s (x21 over 5m59s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-844cf969f6-x8f4x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:28 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lbjz9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lbjz9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m58s                 default-scheduler  Successfully assigned default/mysql-844cf969f6-x8f4x to functional-878866
	  Normal   Pulling    3m3s (x5 over 5m58s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m2s (x5 over 5m56s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m2s (x5 over 5m56s)  kubelet            Error: ErrImagePull
	  Warning  Failed     51s (x20 over 5m56s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    36s (x21 over 5m56s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:24 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lzvcd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-lzvcd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-878866
	  Normal   Pulling    3m15s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m14s (x5 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m14s (x5 over 6m1s)  kubelet            Error: ErrImagePull
	  Warning  Failed     52s (x20 over 6m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    40s (x21 over 6m)     kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-pxs66" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-vrq6x" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (369.01s)
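
Every failure recorded in the pod descriptions above has the same root cause: unauthenticated pulls from registry-1.docker.io are being rejected with 429 Too Many Requests, so the kubelet never gets kicbase/echo-server, mysql:5.7 or nginx onto the node. A minimal sketch of one way a run like this could sidestep the rate limit, assuming the images can be obtained once on the host (from a cache or an authenticated session) and then copied into the node with the minikube image load subcommand; the profile name functional-878866 is taken from the log above, and whether preloading is acceptable for these tests is not established here:

    # obtain the images once on the host, then copy them into the minikube node
    docker pull docker.io/mysql:5.7
    docker pull docker.io/library/nginx:latest
    docker pull docker.io/kicbase/echo-server:latest
    out/minikube-linux-amd64 -p functional-878866 image load docker.io/mysql:5.7
    out/minikube-linux-amd64 -p functional-878866 image load docker.io/library/nginx:latest
    out/minikube-linux-amd64 -p functional-878866 image load docker.io/kicbase/echo-server:latest

Preloading only helps where the pod's imagePullPolicy resolves to IfNotPresent; the echo-server and nginx pods reference an implicit :latest tag, which defaults to Always, so those would still contact Docker Hub unless the policy is overridden or an authenticated pull secret is configured.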

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-878866 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-x8f4x" [c063e131-315a-4d95-80e7-6710bd46865b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1206 09:42:48.982842  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:00.694820  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:00.701166  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:00.712476  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:00.733776  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:00.775098  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:00.856455  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:01.017937  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:01.340150  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:01.981760  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:03.264006  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:05.826102  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:10.948036  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:21.190017  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:41.671795  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:22.633661  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:45:44.555058  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:47:48.982333  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:48:00.694557  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-878866 -n functional-878866
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-12-06 09:52:28.536414558 +0000 UTC m=+2521.823791691
functional_test.go:1804: (dbg) Run:  kubectl --context functional-878866 describe po mysql-844cf969f6-x8f4x -n default
functional_test.go:1804: (dbg) kubectl --context functional-878866 describe po mysql-844cf969f6-x8f4x -n default:
Name:             mysql-844cf969f6-x8f4x
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-878866/192.168.39.195
Start Time:       Sat, 06 Dec 2025 09:42:28 +0000
Labels:           app=mysql
pod-template-hash=844cf969f6
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-844cf969f6
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lbjz9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lbjz9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-844cf969f6-x8f4x to functional-878866
Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Warning  Failed     4m53s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m38s (x21 over 9m58s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-878866 logs mysql-844cf969f6-x8f4x -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-878866 logs mysql-844cf969f6-x8f4x -n default: exit status 1 (62.800609ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-844cf969f6-x8f4x" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-878866 logs mysql-844cf969f6-x8f4x -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
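The kubelet events above and the containerd log further down agree that the mysql image never made it past the registry. A sketch of how the throttling could be confirmed independently of the test harness, assuming crictl is available inside the guest (typical for the minikube ISO, though not shown in this log) and mirroring the "ssh sudo ..." invocation form already used in the Audit table below:

    # reproduce the pull directly against containerd on the node
    out/minikube-linux-amd64 -p functional-878866 ssh sudo crictl pull docker.io/library/mysql:5.7
    # if Docker Hub is still throttling this IP, the same 429 toomanyrequests body quoted in the events should come back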
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-878866 -n functional-878866
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 logs -n 25: (1.222431669s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start          │ -p functional-878866 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ start          │ -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-878866 --alsologtostderr -v=1                                                                                    │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │                     │
	│ ssh            │ functional-878866 ssh sudo cat /etc/ssl/certs/387687.pem                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /usr/share/ca-certificates/387687.pem                                                                              │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /etc/ssl/certs/3876872.pem                                                                                         │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /usr/share/ca-certificates/3876872.pem                                                                             │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ ssh            │ functional-878866 ssh sudo cat /etc/test/nested/copy/387687/hosts                                                                                 │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:42 UTC │ 06 Dec 25 09:42 UTC │
	│ image          │ functional-878866 image ls --format short --alsologtostderr                                                                                       │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ image          │ functional-878866 image ls --format yaml --alsologtostderr                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ ssh            │ functional-878866 ssh pgrep buildkitd                                                                                                             │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │                     │
	│ image          │ functional-878866 image build -t localhost/my-image:functional-878866 testdata/build --alsologtostderr                                            │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ image          │ functional-878866 image ls                                                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ image          │ functional-878866 image ls --format json --alsologtostderr                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ image          │ functional-878866 image ls --format table --alsologtostderr                                                                                       │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ update-context │ functional-878866 update-context --alsologtostderr -v=2                                                                                           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ update-context │ functional-878866 update-context --alsologtostderr -v=2                                                                                           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ update-context │ functional-878866 update-context --alsologtostderr -v=2                                                                                           │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:48 UTC │ 06 Dec 25 09:48 UTC │
	│ service        │ functional-878866 service list                                                                                                                    │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ service        │ functional-878866 service list -o json                                                                                                            │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │ 06 Dec 25 09:52 UTC │
	│ service        │ functional-878866 service --namespace=default --https --url hello-node                                                                            │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ service        │ functional-878866 service hello-node --url --format={{.IP}}                                                                                       │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	│ service        │ functional-878866 service hello-node --url                                                                                                        │ functional-878866 │ jenkins │ v1.37.0 │ 06 Dec 25 09:52 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:42:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:42:24.019904  403261 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:42:24.020138  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020149  403261 out.go:374] Setting ErrFile to fd 2...
	I1206 09:42:24.020155  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020466  403261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:42:24.020924  403261 out.go:368] Setting JSON to false
	I1206 09:42:24.021821  403261 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8694,"bootTime":1765005450,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:42:24.021904  403261 start.go:143] virtualization: kvm guest
	I1206 09:42:24.023333  403261 out.go:179] * [functional-878866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:42:24.025176  403261 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:42:24.025162  403261 notify.go:221] Checking for updates...
	I1206 09:42:24.026373  403261 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:42:24.027496  403261 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:42:24.028692  403261 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:42:24.029895  403261 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:42:24.031067  403261 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:42:24.020670  403254 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:24.021222  403254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:24.060370  403254 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:42:24.061450  403254 start.go:309] selected driver: kvm2
	I1206 09:42:24.061471  403254 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.061626  403254 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:24.063197  403254 cni.go:84] Creating CNI manager for ""
	I1206 09:42:24.063287  403254 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:42:24.063349  403254 start.go:353] cluster config:
	{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.065378  403254 out.go:179] * dry-run validation complete!
	I1206 09:42:24.032603  403261 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:24.033150  403261 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:24.072141  403261 out.go:179] * Using the kvm2 driver based on the existing profile
	I1206 09:42:24.073419  403261 start.go:309] selected driver: kvm2
	I1206 09:42:24.073436  403261 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.073570  403261 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:24.075812  403261 out.go:203] 
	W1206 09:42:24.076830  403261 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I1206 09:42:24.077921  403261 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a655845f1732c       56cc512116c8f       10 minutes ago      Exited              mount-munger              0                   58726284f6200       busybox-mount                               default
	113437169b1c3       6e38f40d628db       10 minutes ago      Running             storage-provisioner       4                   dbe75fc56a826       storage-provisioner                         kube-system
	bdb6875900eef       45f3cc72d235f       10 minutes ago      Running             kube-controller-manager   4                   1a39bbb70829f       kube-controller-manager-functional-878866   kube-system
	468bbd4cc840f       aa5e3ebc0dfed       10 minutes ago      Running             coredns                   2                   ea0409f91e89e       coredns-7d764666f9-9mcjc                    kube-system
	242f635ee3d72       8a4ded35a3eb1       10 minutes ago      Running             kube-proxy                2                   eb63a271df8e5       kube-proxy-nv7xx                            kube-system
	dd7657ebf0fb0       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       3                   dbe75fc56a826       storage-provisioner                         kube-system
	bd0d67905e6c1       aa9d02839d8de       10 minutes ago      Running             kube-apiserver            0                   17a7847a72c32       kube-apiserver-functional-878866            kube-system
	27cb54bb37729       7bb6219ddab95       10 minutes ago      Running             kube-scheduler            2                   3fea2ff6933b4       kube-scheduler-functional-878866            kube-system
	5c87b2b354447       a3e246e9556e9       10 minutes ago      Running             etcd                      2                   638d6d6fc7944       etcd-functional-878866                      kube-system
	301ce41fdfafe       45f3cc72d235f       10 minutes ago      Exited              kube-controller-manager   3                   1a39bbb70829f       kube-controller-manager-functional-878866   kube-system
	d0bf8f349bbc2       a3e246e9556e9       11 minutes ago      Exited              etcd                      1                   638d6d6fc7944       etcd-functional-878866                      kube-system
	9f4729263a3b0       7bb6219ddab95       11 minutes ago      Exited              kube-scheduler            1                   3fea2ff6933b4       kube-scheduler-functional-878866            kube-system
	8ab3cdb5a27f5       aa5e3ebc0dfed       11 minutes ago      Exited              coredns                   1                   ea0409f91e89e       coredns-7d764666f9-9mcjc                    kube-system
	fb6aa836f3149       8a4ded35a3eb1       11 minutes ago      Exited              kube-proxy                1                   eb63a271df8e5       kube-proxy-nv7xx                            kube-system
	
	
	==> containerd <==
	Dec 06 09:48:07 functional-878866 containerd[4558]: time="2025-12-06T09:48:07.092103262Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 06 09:48:07 functional-878866 containerd[4558]: time="2025-12-06T09:48:07.094650671Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:07 functional-878866 containerd[4558]: time="2025-12-06T09:48:07.350367171Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:08 functional-878866 containerd[4558]: time="2025-12-06T09:48:08.023441140Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:08 functional-878866 containerd[4558]: time="2025-12-06T09:48:08.023617968Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 06 09:48:12 functional-878866 containerd[4558]: time="2025-12-06T09:48:12.096615636Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Dec 06 09:48:12 functional-878866 containerd[4558]: time="2025-12-06T09:48:12.101341876Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:12 functional-878866 containerd[4558]: time="2025-12-06T09:48:12.348196563Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:13 functional-878866 containerd[4558]: time="2025-12-06T09:48:13.008714340Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:13 functional-878866 containerd[4558]: time="2025-12-06T09:48:13.008829513Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Dec 06 09:48:16 functional-878866 containerd[4558]: time="2025-12-06T09:48:16.094970883Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Dec 06 09:48:16 functional-878866 containerd[4558]: time="2025-12-06T09:48:16.099262110Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:16 functional-878866 containerd[4558]: time="2025-12-06T09:48:16.352867252Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Dec 06 09:48:17 functional-878866 containerd[4558]: time="2025-12-06T09:48:17.010465023Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:48:17 functional-878866 containerd[4558]: time="2025-12-06T09:48:17.010629820Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.558937531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.559046428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.559062221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.559758391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.687106280Z" level=info msg="shim disconnected" id=g0c8jtfcz5luk7zauxomyoodt namespace=k8s.io
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.687542771Z" level=warning msg="cleaning up after shim disconnected" id=g0c8jtfcz5luk7zauxomyoodt namespace=k8s.io
	Dec 06 09:48:28 functional-878866 containerd[4558]: time="2025-12-06T09:48:28.687602418Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 06 09:48:29 functional-878866 containerd[4558]: time="2025-12-06T09:48:29.010714179Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-878866\""
	Dec 06 09:48:29 functional-878866 containerd[4558]: time="2025-12-06T09:48:29.018489926Z" level=info msg="ImageCreate event name:\"sha256:e9c335317f52720f206b2db3c5f6d8c7fbed1726ccd36409ba6505bf0023fbb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 06 09:48:29 functional-878866 containerd[4558]: time="2025-12-06T09:48:29.021848809Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-878866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> coredns [468bbd4cc840f32413f20e1e8a02e5a4cb7382aedb4c8a5909efc9aab6bf840a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54038 - 4803 "HINFO IN 6012224714022320600.6695806507940078930. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.498138699s
	
	
	==> coredns [8ab3cdb5a27f5110ee0a75f3126afadad9de822ca2418dec4c39630836e67768] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44124 - 54735 "HINFO IN 6652856036360049109.7864262208624346140. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079558221s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-878866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-878866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-878866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_39_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:39:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-878866
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:52:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:48:58 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:48:58 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:48:58 +0000   Sat, 06 Dec 2025 09:39:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:48:58 +0000   Sat, 06 Dec 2025 09:39:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    functional-878866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8a7ede37b9346d29806749a6624cb26
	  System UUID:                d8a7ede3-7b93-46d2-9806-749a6624cb26
	  Boot ID:                    201bfa39-c6a0-473d-92c7-ea19f1cbce81
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-dxgxn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-94flb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-x8f4x                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-9mcjc                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-878866                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-878866              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-878866     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nv7xx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-878866              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-pxs66    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vrq6x          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                Age   From             Message
	  ----    ------                ----  ----             -------
	  Normal  RegisteredNode        12m   node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	  Normal  CIDRAssignmentFailed  12m   cidrAllocator    Node functional-878866 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode        11m   node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	  Normal  RegisteredNode        10m   node-controller  Node functional-878866 event: Registered Node functional-878866 in Controller
	
	
	==> dmesg <==
	[  +1.183064] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084534] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.104220] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.120351] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.063268] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 09:40] kauditd_printk_skb: 276 callbacks suppressed
	[ +32.658143] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.884559] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.057660] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.250628] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.162312] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 6 09:41] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.117477] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.001336] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.232546] kauditd_printk_skb: 55 callbacks suppressed
	[  +3.667647] kauditd_printk_skb: 74 callbacks suppressed
	[Dec 6 09:42] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.287607] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.861371] kauditd_printk_skb: 127 callbacks suppressed
	[  +0.000482] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.941950] kauditd_printk_skb: 194 callbacks suppressed
	[Dec 6 09:48] crun[8706]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.126578] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [5c87b2b35444796d5546b4df37410d5bf4723a21ef12d6bf8f413569ca270286] <==
	{"level":"warn","ts":"2025-12-06T09:41:49.440870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.448509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.456640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.464438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.472941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.485430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.495353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.511314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.524369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.531963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.539221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.547392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.556536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.568757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.578779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.586547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.594627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.602424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.613696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.620753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.628230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:49.703265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48216","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:51:49.043876Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1401}
	{"level":"info","ts":"2025-12-06T09:51:49.068804Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1401,"took":"23.821634ms","hash":946335887,"current-db-size-bytes":4071424,"current-db-size":"4.1 MB","current-db-size-in-use-bytes":2134016,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-06T09:51:49.068838Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":946335887,"revision":1401,"compact-revision":-1}
	
	
	==> etcd [d0bf8f349bbc290485d8210fdd2f3fb4eb8be97bd7679ccbac00714d84bff7cd] <==
	{"level":"warn","ts":"2025-12-06T09:41:05.651066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.667701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.680961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.685784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.697007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.706722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:41:05.756545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39140","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:41:41.304782Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T09:41:41.304851Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-878866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	{"level":"error","ts":"2025-12-06T09:41:41.304928Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:41:41.304984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:41:41.306703Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.306771Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"324857e3fe6e5c62","current-leader-member-id":"324857e3fe6e5c62"}
	{"level":"info","ts":"2025-12-06T09:41:41.306847Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T09:41:41.306875Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307236Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307312Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:41:41.307321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307640Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:41:41.307777Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:41:41.307788Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.195:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.310487Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"error","ts":"2025-12-06T09:41:41.310547Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.195:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:41:41.310648Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2025-12-06T09:41:41.310666Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-878866","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	
	
	==> kernel <==
	 09:52:29 up 13 min,  0 users,  load average: 0.47, 0.37, 0.28
	Linux functional-878866 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [bd0d67905e6c1f2134865dda105fed233a0a83a241a0356900579bf523721f2d] <==
	I1206 09:41:50.417403       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:41:50.434798       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:41:50.465437       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:41:50.469440       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:41:50.473406       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:50.491599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:41:50.497599       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:41:51.145164       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:41:51.275751       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1206 09:41:51.812960       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I1206 09:41:51.814646       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:41:51.820722       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:41:52.396418       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:41:52.443361       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:41:52.471343       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:41:52.482831       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:41:54.504347       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:42:11.434844       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.54.232"}
	I1206 09:42:15.521981       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.151.210"}
	I1206 09:42:25.195765       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:42:25.470176       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.252.61"}
	I1206 09:42:25.655391       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.187.143"}
	I1206 09:42:25.673856       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.202.77"}
	I1206 09:42:28.270311       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.110.238"}
	I1206 09:51:50.368752       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [301ce41fdfafed40e68e49ee314f81bbc53e16ad44cc043d94fd73225b9fecb3] <==
	I1206 09:41:41.658756       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:41:41.672446       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1206 09:41:41.672485       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:41.674118       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1206 09:41:41.674205       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1206 09:41:41.674208       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1206 09:41:41.674341       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:41:51.681697       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [bdb6875900eefdb8a5d6f908a4e221a38a6d54ba97657137646fc200dba38a89] <==
	I1206 09:41:54.128709       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.119713       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128530       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128535       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128541       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128545       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128972       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128978       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.120273       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.128511       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.120289       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.119701       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.141422       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.216177       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.221533       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:54.221643       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:41:54.221649       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E1206 09:42:25.327816       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.333329       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.341834       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.375193       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.375293       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.416821       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.422484       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:42:25.432393       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [242f635ee3d72904c3cbe96ad4ab51fe3b6e05ba970b68336d7568b6dc232a80] <==
	I1206 09:41:51.821997       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:41:51.925315       1 shared_informer.go:377] "Caches are synced"
	I1206 09:41:51.925336       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1206 09:41:51.925415       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:41:51.964879       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:41:51.964934       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:41:51.964953       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:41:51.974241       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:41:51.974520       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:41:51.974531       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:51.978887       1 config.go:200] "Starting service config controller"
	I1206 09:41:51.978913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:41:51.978938       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:41:51.978943       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:41:51.978952       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:41:51.978956       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:41:51.979291       1 config.go:309] "Starting node config controller"
	I1206 09:41:51.979296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:41:51.979308       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:41:52.079337       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:41:52.079359       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:41:52.079396       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fb6aa836f31492adc7d3b470fb9e656f39e0c5c2645af9fe9a1150d9e1c0e275] <==
	I1206 09:40:56.864442       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1206 09:40:56.864503       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:40:56.901030       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:40:56.901092       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:40:56.901132       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:40:56.910200       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:40:56.910549       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:40:56.910625       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:40:56.916862       1 config.go:200] "Starting service config controller"
	I1206 09:40:56.916898       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:40:56.916912       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:40:56.916916       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:40:56.916926       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:40:56.916929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:40:56.918717       1 config.go:309] "Starting node config controller"
	I1206 09:40:56.918729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:40:56.918734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1206 09:40:56.919143       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.195:8441: connect: connection refused"
	E1206 09:41:06.427779       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]"
	I1206 09:41:06.517987       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:41:16.917989       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:41:17.418051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [27cb54bb377297f85fd874a25db94789aca7a4095cbd4184870c7fce9e0dcd66] <==
	I1206 09:41:42.817226       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:41:42.823657       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.195:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.195:8441: connect: connection refused
	W1206 09:41:42.823750       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:41:42.823769       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:41:42.832420       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:41:42.832436       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:41:42.835185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:41:42.835306       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:41:42.835368       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:41:42.835534       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:41:50.324649       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1206 09:41:50.325177       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1206 09:41:50.325711       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1206 09:41:50.332323       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1206 09:41:50.337052       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1206 09:41:50.337336       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I1206 09:41:50.935781       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [9f4729263a3b0d4ea91d2b994e07e1f3295d8db7192fee9b108c8f73e694abcd] <==
	I1206 09:40:47.309076       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:40:55.199596       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:40:55.199724       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:40:55.205483       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:40:55.205640       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1206 09:40:55.205653       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.205669       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:40:55.208848       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:40:55.208881       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.208906       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:40:55.209295       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:40:55.305768       1 shared_informer.go:377] "Caches are synced"
	I1206 09:40:55.308994       1 shared_informer.go:377] "Caches are synced"
	I1206 09:40:55.310391       1 shared_informer.go:377] "Caches are synced"
	E1206 09:41:06.338108       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1206 09:41:06.359751       1 reflector.go:204] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1206 09:41:06.427066       1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I1206 09:41:41.397323       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 09:41:41.397810       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:41:41.398141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:41:41.398167       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1206 09:41:41.398479       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 09:41:41.398664       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 09:41:41.398896       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 09:41:41.399017       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 06 09:51:55 functional-878866 kubelet[5363]: E1206 09:51:55.092210    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" containerName="dashboard-metrics-scraper"
	Dec 06 09:51:55 functional-878866 kubelet[5363]: E1206 09:51:55.094140    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" podUID="bdd45c49-4f7d-4c58-bf71-55d765230fe9"
	Dec 06 09:51:56 functional-878866 kubelet[5363]: E1206 09:51:56.094854    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:51:57 functional-878866 kubelet[5363]: E1206 09:51:57.092275    5363 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-878866" containerName="kube-scheduler"
	Dec 06 09:51:57 functional-878866 kubelet[5363]: E1206 09:51:57.092451    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	Dec 06 09:51:59 functional-878866 kubelet[5363]: E1206 09:51:59.092743    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-dxgxn" podUID="fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68"
	Dec 06 09:51:59 functional-878866 kubelet[5363]: E1206 09:51:59.093800    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-x8f4x" podUID="c063e131-315a-4d95-80e7-6710bd46865b"
	Dec 06 09:52:06 functional-878866 kubelet[5363]: E1206 09:52:06.091883    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" containerName="kubernetes-dashboard"
	Dec 06 09:52:06 functional-878866 kubelet[5363]: E1206 09:52:06.093344    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" podUID="ca4a50bc-be9f-42d9-8667-c0c28149a805"
	Dec 06 09:52:09 functional-878866 kubelet[5363]: E1206 09:52:09.092412    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" containerName="dashboard-metrics-scraper"
	Dec 06 09:52:09 functional-878866 kubelet[5363]: E1206 09:52:09.092972    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:52:09 functional-878866 kubelet[5363]: E1206 09:52:09.093199    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	Dec 06 09:52:09 functional-878866 kubelet[5363]: E1206 09:52:09.094856    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" podUID="bdd45c49-4f7d-4c58-bf71-55d765230fe9"
	Dec 06 09:52:12 functional-878866 kubelet[5363]: E1206 09:52:12.092753    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-dxgxn" podUID="fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68"
	Dec 06 09:52:13 functional-878866 kubelet[5363]: E1206 09:52:13.092788    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-x8f4x" podUID="c063e131-315a-4d95-80e7-6710bd46865b"
	Dec 06 09:52:17 functional-878866 kubelet[5363]: E1206 09:52:17.092002    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" containerName="kubernetes-dashboard"
	Dec 06 09:52:17 functional-878866 kubelet[5363]: E1206 09:52:17.093723    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-vrq6x" podUID="ca4a50bc-be9f-42d9-8667-c0c28149a805"
	Dec 06 09:52:20 functional-878866 kubelet[5363]: E1206 09:52:20.097672    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-94flb" podUID="ccbb1545-75f9-4cb6-a66a-541dd12483f3"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.092477    5363 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9mcjc" containerName="coredns"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.092916    5363 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" containerName="dashboard-metrics-scraper"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.093658    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ffe9435b-fd28-4a28-8bbe-994ce1895e67"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.094043    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-dxgxn" podUID="fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68"
	Dec 06 09:52:24 functional-878866 kubelet[5363]: E1206 09:52:24.095466    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-pxs66" podUID="bdd45c49-4f7d-4c58-bf71-55d765230fe9"
	Dec 06 09:52:27 functional-878866 kubelet[5363]: E1206 09:52:27.094347    5363 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-x8f4x" podUID="c063e131-315a-4d95-80e7-6710bd46865b"
	Dec 06 09:52:29 functional-878866 kubelet[5363]: E1206 09:52:29.092204    5363 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-878866" containerName="etcd"
	
	
	==> storage-provisioner [113437169b1c37974e96925307abdbb55cfc82e538a9033341e8c43d02a37a4d] <==
	W1206 09:52:04.641755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:06.645493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:06.650796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:08.654720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:08.662681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:10.666840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:10.672941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:12.675870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:12.683229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:14.685638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:14.690513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:16.693993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:16.700187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:18.704489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:18.711234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:20.714802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:20.724109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:22.727494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:22.735691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:24.738933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:24.743347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:26.747442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:26.759740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:28.766177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:52:28.774289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dd7657ebf0fb090991f2e33ae757c5f56eeb3eaf14b5863f3aedd29af194ccd3] <==
	I1206 09:41:51.647942       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 09:41:51.652108       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-878866 -n functional-878866
helpers_test.go:269: (dbg) Run:  kubectl --context functional-878866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x: exit status 1 (115.91092ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  containerd://a655845f1732c2d1de014118f364a31fd9020fa965e1a3db715a9272b206b9f5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:42:18 +0000
	      Finished:     Sat, 06 Dec 2025 09:42:18 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxrxb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fxrxb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-878866
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 727ms (727ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    10m   kubelet            Container created
	  Normal  Started    10m   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-dxgxn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:15 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k6dw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9k6dw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-5758569b79-dxgxn to functional-878866
	  Warning  Failed     8m44s (x3 over 9m57s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m22s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m21s (x2 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m21s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    6s (x41 over 10m)      kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     6s (x41 over 10m)      kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-94flb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nc2xk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nc2xk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-94flb to functional-878866
	  Warning  Failed     9m26s                 kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m5s (x4 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m5s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m59s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m59s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-844cf969f6-x8f4x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:28 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lbjz9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lbjz9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-844cf969f6-x8f4x to functional-878866
	  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m55s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-878866/192.168.39.195
	Start Time:       Sat, 06 Dec 2025 09:42:24 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lzvcd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-lzvcd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/sp-pod to functional-878866
	  Normal   Pulling    7m19s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m18s (x5 over 10m)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m18s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-pxs66" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-vrq6x" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-878866 describe pod busybox-mount hello-node-5758569b79-dxgxn hello-node-connect-9f67c86d4-94flb mysql-844cf969f6-x8f4x sp-pod dashboard-metrics-scraper-5565989548-pxs66 kubernetes-dashboard-b84665fb8-vrq6x: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.37s)
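Note: every ErrImagePull event above traces back to Docker Hub's unauthenticated pull rate limit (429 Too Many Requests) for docker.io/mysql:5.7. A minimal mitigation sketch, assuming the host can pull the image with authenticated Docker Hub credentials and reusing the profile name and binary path from this run, is to pre-load the image into the cluster before the test exercises it:

	# pull on the host with authenticated credentials (avoids the anonymous rate limit)
	docker login
	docker pull docker.io/mysql:5.7
	# copy the cached image into the minikube node's containerd image store
	out/minikube-linux-amd64 -p functional-878866 image load docker.io/mysql:5.7
	# confirm the image is now present inside the node
	out/minikube-linux-amd64 -p functional-878866 image ls | grep mysql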

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-878866 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-878866 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-dxgxn" [fd6758ef-50d5-47aa-8c0f-2ec71d1f4a68] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-878866 -n functional-878866
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-06 09:52:15.766852363 +0000 UTC m=+2509.054229486
functional_test.go:1460: (dbg) Run:  kubectl --context functional-878866 describe po hello-node-5758569b79-dxgxn -n default
functional_test.go:1460: (dbg) kubectl --context functional-878866 describe po hello-node-5758569b79-dxgxn -n default:
Name:             hello-node-5758569b79-dxgxn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-878866/192.168.39.195
Start Time:       Sat, 06 Dec 2025 09:42:15 +0000
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k6dw (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9k6dw:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-5758569b79-dxgxn to functional-878866
Warning  Failed     8m29s (x3 over 9m42s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m7s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x2 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m6s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x19 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m26s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-878866 logs hello-node-5758569b79-dxgxn -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-878866 logs hello-node-5758569b79-dxgxn -n default: exit status 1 (68.271312ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-dxgxn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-878866 logs hello-node-5758569b79-dxgxn -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.55s)
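Note: the hello-node pod never starts because each pull of kicbase/echo-server hits the same anonymous Docker Hub rate limit. A short diagnostic sketch, assuming SSH access to the node via the profile from this run, reproduces the failure directly against containerd and shows how often the kubelet has retried:

	# try the pull from inside the node; a 429 here confirms a registry-side limit rather than a cluster problem
	out/minikube-linux-amd64 -p functional-878866 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest
	# list the failed pull events recorded for the default namespace
	kubectl --context functional-878866 get events -n default --field-selector reason=Failed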

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 service --namespace=default --https --url hello-node: exit status 115 (235.325762ms)

                                                
                                                
-- stdout --
	https://192.168.39.195:30915
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-878866 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 service hello-node --url --format={{.IP}}: exit status 115 (238.409977ms)

                                                
                                                
-- stdout --
	192.168.39.195
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-878866 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 service hello-node --url: exit status 115 (237.558245ms)

                                                
                                                
-- stdout --
	http://192.168.39.195:30915
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-878866 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.195:30915
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.24s)
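Note: the ServiceCmd/HTTPS, Format, and URL failures are secondary; minikube exits with SVC_UNREACHABLE because the hello-node service has no ready backing pod (the deployment above is still in ImagePullBackOff). A quick check sketch, assuming the same kubectl context, confirms this precondition before retrying the URL commands:

	# a ready pod and a populated endpoints object are prerequisites for `minikube service`
	kubectl --context functional-878866 get pods -l app=hello-node -n default
	kubectl --context functional-878866 get endpoints hello-node -n default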

                                                
                                    

Test pass (372/437)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.67
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.18
12 TestDownloadOnly/v1.34.2/json-events 3.17
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.15
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.01
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.15
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.63
31 TestOffline 82.97
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 127.13
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 8.52
44 TestAddons/parallel/Registry 15.48
45 TestAddons/parallel/RegistryCreds 0.61
47 TestAddons/parallel/InspektorGadget 11.65
48 TestAddons/parallel/MetricsServer 6.76
51 TestAddons/parallel/Headlamp 18.02
52 TestAddons/parallel/CloudSpanner 6.67
54 TestAddons/parallel/NvidiaDevicePlugin 6.58
55 TestAddons/parallel/Yakd 11.81
57 TestAddons/StoppedEnableDisable 88.01
58 TestCertOptions 81.51
59 TestCertExpiration 297.01
61 TestForceSystemdFlag 76.19
62 TestForceSystemdEnv 62.54
67 TestErrorSpam/setup 37.31
68 TestErrorSpam/start 0.33
69 TestErrorSpam/status 0.66
70 TestErrorSpam/pause 1.46
71 TestErrorSpam/unpause 1.75
72 TestErrorSpam/stop 4.42
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 76.05
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 43.88
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.81
84 TestFunctional/serial/CacheCmd/cache/add_local 1.26
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.36
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 38.82
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.3
95 TestFunctional/serial/LogsFileCmd 1.2
96 TestFunctional/serial/InvalidService 4
98 TestFunctional/parallel/ConfigCmd 0.39
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.86
106 TestFunctional/parallel/ServiceCmdConnect 344.45
107 TestFunctional/parallel/AddonsCmd 0.15
110 TestFunctional/parallel/SSHCmd 0.36
111 TestFunctional/parallel/CpCmd 1.13
112 TestFunctional/parallel/MySQL 27.09
113 TestFunctional/parallel/FileSync 0.17
114 TestFunctional/parallel/CertSync 1.05
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
122 TestFunctional/parallel/License 0.44
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.19
124 TestFunctional/parallel/Version/short 0.06
125 TestFunctional/parallel/Version/components 0.42
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
130 TestFunctional/parallel/ImageCommands/ImageBuild 2.33
131 TestFunctional/parallel/ImageCommands/Setup 0.96
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.88
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
141 TestFunctional/parallel/ServiceCmd/List 0.28
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
145 TestFunctional/parallel/ServiceCmd/Format 0.28
146 TestFunctional/parallel/ServiceCmd/URL 0.3
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
148 TestFunctional/parallel/ProfileCmd/profile_list 0.35
149 TestFunctional/parallel/MountCmd/any-port 10.02
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
160 TestFunctional/parallel/MountCmd/specific-port 1.56
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.12
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.01
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 78.85
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 44.28
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.08
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.77
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.26
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.37
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 41.82
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.2
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.22
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.09
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.43
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.26
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.13
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.9
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.31
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.26
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.18
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.17
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.34
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.41
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.37
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 4.98
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.31
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.43
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.19
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.19
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.19
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.2
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.24
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.4
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.21
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.14
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.53
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.55
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.38
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.4
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.78
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.17
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.41
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.07
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.19
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.19
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 190.68
262 TestMultiControlPlane/serial/DeployApp 4.7
263 TestMultiControlPlane/serial/PingHostFromPods 1.32
264 TestMultiControlPlane/serial/AddWorkerNode 42.69
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
267 TestMultiControlPlane/serial/CopyFile 10.79
268 TestMultiControlPlane/serial/StopSecondaryNode 82.81
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
270 TestMultiControlPlane/serial/RestartSecondaryNode 27.64
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.78
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 387.97
273 TestMultiControlPlane/serial/DeleteSecondaryNode 6.56
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
275 TestMultiControlPlane/serial/StopCluster 255.46
276 TestMultiControlPlane/serial/RestartCluster 115.85
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
278 TestMultiControlPlane/serial/AddSecondaryNode 71.56
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
284 TestJSONOutput/start/Command 78.72
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.69
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.59
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.7
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.23
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 81.89
316 TestMountStart/serial/StartWithMountFirst 22.12
317 TestMountStart/serial/VerifyMountFirst 0.29
318 TestMountStart/serial/StartWithMountSecond 24.13
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.67
321 TestMountStart/serial/VerifyMountPostDelete 0.3
322 TestMountStart/serial/Stop 1.4
323 TestMountStart/serial/RestartStopped 18.3
324 TestMountStart/serial/VerifyMountPostStop 0.31
327 TestMultiNode/serial/FreshStart2Nodes 98.5
328 TestMultiNode/serial/DeployApp2Nodes 3.78
329 TestMultiNode/serial/PingHostFrom2Pods 0.84
330 TestMultiNode/serial/AddNode 42.39
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.45
333 TestMultiNode/serial/CopyFile 5.95
334 TestMultiNode/serial/StopNode 2.04
335 TestMultiNode/serial/StartAfterStop 36.45
336 TestMultiNode/serial/RestartKeepsNodes 295.98
337 TestMultiNode/serial/DeleteNode 2
338 TestMultiNode/serial/StopMultiNode 170.31
339 TestMultiNode/serial/RestartMultiNode 76.76
340 TestMultiNode/serial/ValidateNameConflict 39.76
345 TestPreload 140.2
347 TestScheduledStopUnix 107.82
351 TestRunningBinaryUpgrade 147.25
353 TestKubernetesUpgrade 178.23
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestNoKubernetes/serial/StartWithK8s 81.49
358 TestNoKubernetes/serial/StartWithStopK8s 28.12
366 TestNetworkPlugins/group/false 4.28
370 TestISOImage/Setup 24.73
371 TestNoKubernetes/serial/Start 33.26
373 TestISOImage/Binaries/crictl 0.19
374 TestISOImage/Binaries/curl 0.2
375 TestISOImage/Binaries/docker 0.19
376 TestISOImage/Binaries/git 0.2
377 TestISOImage/Binaries/iptables 0.2
378 TestISOImage/Binaries/podman 0.19
379 TestISOImage/Binaries/rsync 0.2
380 TestISOImage/Binaries/socat 0.21
381 TestISOImage/Binaries/wget 0.21
382 TestISOImage/Binaries/VBoxControl 0.2
383 TestISOImage/Binaries/VBoxService 0.21
384 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
385 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
386 TestNoKubernetes/serial/ProfileList 2.18
387 TestNoKubernetes/serial/Stop 1.49
388 TestNoKubernetes/serial/StartNoArgs 34.71
389 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
390 TestStoppedBinaryUpgrade/Setup 0.6
391 TestStoppedBinaryUpgrade/Upgrade 150.74
400 TestPause/serial/Start 123.52
401 TestNetworkPlugins/group/auto/Start 106.58
402 TestStoppedBinaryUpgrade/MinikubeLogs 1.4
403 TestNetworkPlugins/group/kindnet/Start 58.98
404 TestPause/serial/SecondStartNoReconfiguration 53.42
405 TestNetworkPlugins/group/auto/KubeletFlags 0.18
406 TestNetworkPlugins/group/auto/NetCatPod 8.25
407 TestNetworkPlugins/group/auto/DNS 0.16
408 TestNetworkPlugins/group/auto/Localhost 0.13
409 TestNetworkPlugins/group/auto/HairPin 0.12
410 TestNetworkPlugins/group/calico/Start 73.77
411 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
412 TestPause/serial/Pause 0.71
413 TestPause/serial/VerifyStatus 0.24
414 TestPause/serial/Unpause 0.62
415 TestPause/serial/PauseAgain 0.82
416 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
417 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
418 TestPause/serial/DeletePaused 0.88
419 TestPause/serial/VerifyDeletedResources 0.73
420 TestNetworkPlugins/group/custom-flannel/Start 81.27
421 TestNetworkPlugins/group/kindnet/DNS 0.15
422 TestNetworkPlugins/group/kindnet/Localhost 0.12
423 TestNetworkPlugins/group/kindnet/HairPin 0.12
424 TestNetworkPlugins/group/enable-default-cni/Start 66.18
425 TestNetworkPlugins/group/calico/ControllerPod 6.01
426 TestNetworkPlugins/group/calico/KubeletFlags 0.23
427 TestNetworkPlugins/group/calico/NetCatPod 10.4
428 TestNetworkPlugins/group/flannel/Start 70.96
429 TestNetworkPlugins/group/calico/DNS 0.19
430 TestNetworkPlugins/group/calico/Localhost 0.14
431 TestNetworkPlugins/group/calico/HairPin 0.17
432 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.19
433 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.32
434 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
435 TestNetworkPlugins/group/custom-flannel/DNS 0.19
436 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
437 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
438 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
439 TestNetworkPlugins/group/bridge/Start 85.52
440 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
441 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
442 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
444 TestStartStop/group/old-k8s-version/serial/FirstStart 104.89
446 TestStartStop/group/no-preload/serial/FirstStart 113.82
447 TestNetworkPlugins/group/flannel/ControllerPod 6.01
448 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
449 TestNetworkPlugins/group/flannel/NetCatPod 10.24
450 TestNetworkPlugins/group/flannel/DNS 0.17
451 TestNetworkPlugins/group/flannel/Localhost 0.15
452 TestNetworkPlugins/group/flannel/HairPin 0.12
454 TestStartStop/group/embed-certs/serial/FirstStart 87.32
455 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
456 TestNetworkPlugins/group/bridge/NetCatPod 10.24
457 TestNetworkPlugins/group/bridge/DNS 0.15
458 TestNetworkPlugins/group/bridge/Localhost 0.14
459 TestNetworkPlugins/group/bridge/HairPin 0.15
461 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.31
462 TestStartStop/group/old-k8s-version/serial/DeployApp 9.33
463 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
464 TestStartStop/group/old-k8s-version/serial/Stop 82.34
465 TestStartStop/group/no-preload/serial/DeployApp 7.31
466 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
467 TestStartStop/group/no-preload/serial/Stop 71.57
468 TestStartStop/group/embed-certs/serial/DeployApp 7.26
469 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
470 TestStartStop/group/embed-certs/serial/Stop 77.81
471 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.25
472 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
473 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.5
474 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
475 TestStartStop/group/old-k8s-version/serial/SecondStart 38.4
476 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
477 TestStartStop/group/no-preload/serial/SecondStart 54.85
478 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 18.01
479 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
480 TestStartStop/group/embed-certs/serial/SecondStart 45.52
481 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
482 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
483 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
484 TestStartStop/group/old-k8s-version/serial/Pause 2.82
485 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
487 TestStartStop/group/newest-cni/serial/FirstStart 43.33
488 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
489 TestStartStop/group/no-preload/serial/Pause 3.05
491 TestISOImage/PersistentMounts//data 0.18
492 TestISOImage/PersistentMounts//var/lib/docker 0.19
493 TestISOImage/PersistentMounts//var/lib/cni 0.19
494 TestISOImage/PersistentMounts//var/lib/kubelet 0.18
495 TestISOImage/PersistentMounts//var/lib/minikube 0.21
496 TestISOImage/PersistentMounts//var/lib/toolbox 0.2
497 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
498 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
499 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.76
500 TestISOImage/VersionJSON 0.2
501 TestISOImage/eBPFSupport 0.18
502 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
503 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
504 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
505 TestStartStop/group/embed-certs/serial/Pause 2.93
506 TestStartStop/group/newest-cni/serial/DeployApp 0
507 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
508 TestStartStop/group/newest-cni/serial/Stop 2.53
509 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
510 TestStartStop/group/newest-cni/serial/SecondStart 32.15
511 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
512 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
513 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
514 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.68
515 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
516 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
517 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
518 TestStartStop/group/newest-cni/serial/Pause 2.33
x
+
TestDownloadOnly/v1.28.0/json-events (6.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-600827 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-600827 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.669101177s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 09:10:33.419279  387687 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1206 09:10:33.419362  387687 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-600827
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-600827: exit status 85 (78.177643ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-600827 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:26.803109  387699 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:26.803357  387699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:26.803366  387699 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:26.803369  387699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:26.803545  387699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	W1206 09:10:26.803653  387699 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22047-383742/.minikube/config/config.json: open /home/jenkins/minikube-integration/22047-383742/.minikube/config/config.json: no such file or directory
	I1206 09:10:26.804096  387699 out.go:368] Setting JSON to true
	I1206 09:10:26.804996  387699 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6777,"bootTime":1765005450,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:26.805046  387699 start.go:143] virtualization: kvm guest
	I1206 09:10:26.809421  387699 out.go:99] [download-only-600827] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1206 09:10:26.809543  387699 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 09:10:26.809557  387699 notify.go:221] Checking for updates...
	I1206 09:10:26.810809  387699 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:10:26.812066  387699 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:26.813295  387699 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:10:26.814466  387699 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:26.815651  387699 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:10:26.817760  387699 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:10:26.817976  387699 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:10:26.847044  387699 out.go:99] Using the kvm2 driver based on user configuration
	I1206 09:10:26.847065  387699 start.go:309] selected driver: kvm2
	I1206 09:10:26.847071  387699 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:10:26.847342  387699 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:10:26.847793  387699 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1206 09:10:26.847958  387699 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:10:26.847988  387699 cni.go:84] Creating CNI manager for ""
	I1206 09:10:26.848031  387699 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1206 09:10:26.848040  387699 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:10:26.848081  387699 start.go:353] cluster config:
	{Name:download-only-600827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-600827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:10:26.848225  387699 iso.go:125] acquiring lock: {Name:mk1a7d442a240aa1785a2e6e751e007c5a8723f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:10:26.849631  387699 out.go:99] Downloading VM boot image ...
	I1206 09:10:26.849659  387699 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22047-383742/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 09:10:29.956573  387699 out.go:99] Starting "download-only-600827" primary control-plane node in "download-only-600827" cluster
	I1206 09:10:29.956623  387699 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1206 09:10:29.978017  387699 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1206 09:10:29.978037  387699 cache.go:65] Caching tarball of preloaded images
	I1206 09:10:29.978196  387699 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1206 09:10:29.979612  387699 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1206 09:10:29.979641  387699 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1206 09:10:30.002149  387699 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1206 09:10:30.002238  387699 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-600827 host does not exist
	  To start a cluster, run: "minikube start -p download-only-600827"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.18s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-600827
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.18s)

TestDownloadOnly/v1.34.2/json-events (3.17s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-345944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-345944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (3.172517221s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.17s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 09:10:37.010202  387687 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1206 09:10:37.010244  387687 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-345944
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-345944: exit status 85 (71.882949ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-600827 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-600827                                                                                                                                                             │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ start   │ -o=json --download-only -p download-only-345944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-345944 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:33.890693  387879 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:33.890800  387879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:33.890806  387879 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:33.890812  387879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:33.891044  387879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:10:33.891497  387879 out.go:368] Setting JSON to true
	I1206 09:10:33.892463  387879 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6784,"bootTime":1765005450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:33.892514  387879 start.go:143] virtualization: kvm guest
	I1206 09:10:33.898787  387879 out.go:99] [download-only-345944] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:10:33.898973  387879 notify.go:221] Checking for updates...
	I1206 09:10:33.903759  387879 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:10:33.908201  387879 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:33.909552  387879 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:10:33.910791  387879 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:33.911825  387879 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-345944 host does not exist
	  To start a cluster, run: "minikube start -p download-only-345944"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-345944
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.01s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-802744 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-802744 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (3.012044858s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.01s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 09:10:40.382316  387687 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1206 09:10:40.382359  387687 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-383742/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-802744
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-802744: exit status 85 (71.683605ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                            ARGS                                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-600827 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd        │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-600827                                                                                                                                                                    │ download-only-600827 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ start   │ -o=json --download-only -p download-only-345944 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd        │ download-only-345944 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ delete  │ -p download-only-345944                                                                                                                                                                    │ download-only-345944 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │ 06 Dec 25 09:10 UTC │
	│ start   │ -o=json --download-only -p download-only-802744 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-802744 │ jenkins │ v1.37.0 │ 06 Dec 25 09:10 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:10:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:10:37.421447  388058 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:10:37.421555  388058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:37.421563  388058 out.go:374] Setting ErrFile to fd 2...
	I1206 09:10:37.421568  388058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:10:37.421760  388058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:10:37.422196  388058 out.go:368] Setting JSON to true
	I1206 09:10:37.423023  388058 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6787,"bootTime":1765005450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:10:37.423081  388058 start.go:143] virtualization: kvm guest
	I1206 09:10:37.424811  388058 out.go:99] [download-only-802744] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:10:37.425020  388058 notify.go:221] Checking for updates...
	I1206 09:10:37.426496  388058 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:10:37.427826  388058 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:10:37.429073  388058 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:10:37.430249  388058 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:10:37.431479  388058 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-802744 host does not exist
	  To start a cluster, run: "minikube start -p download-only-802744"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-802744
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I1206 09:10:41.156704  387687 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-098159 --alsologtostderr --binary-mirror http://127.0.0.1:43773 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-098159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-098159
--- PASS: TestBinaryMirror (0.63s)

TestOffline (82.97s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-636443 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-636443 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd: (1m21.975532259s)
helpers_test.go:175: Cleaning up "offline-containerd-636443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-636443
--- PASS: TestOffline (82.97s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-269722
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-269722: exit status 85 (64.016023ms)

-- stdout --
	* Profile "addons-269722" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-269722"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-269722
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-269722: exit status 85 (64.054049ms)

-- stdout --
	* Profile "addons-269722" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-269722"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (127.13s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-269722 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-269722 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m7.126370327s)
--- PASS: TestAddons/Setup (127.13s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-269722 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-269722 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.52s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-269722 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-269722 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9994469c-a788-4710-89a3-fb2e1eebffcb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9994469c-a788-4710-89a3-fb2e1eebffcb] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004439387s
addons_test.go:694: (dbg) Run:  kubectl --context addons-269722 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-269722 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-269722 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.52s)

TestAddons/parallel/Registry (15.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.243059ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-rbbt6" [ec4e4a7f-6fd3-435d-bd23-ab587ffa45ba] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003044744s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-hbw67" [d47f2901-94d3-4e16-a0a8-5155e3f36879] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003055005s
addons_test.go:392: (dbg) Run:  kubectl --context addons-269722 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-269722 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-269722 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.737000391s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 ip
2025/12/06 09:19:34 [DEBUG] GET http://192.168.39.220:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.48s)

TestAddons/parallel/RegistryCreds (0.61s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.081779ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-269722
addons_test.go:332: (dbg) Run:  kubectl --context addons-269722 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.61s)

TestAddons/parallel/InspektorGadget (11.65s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-clpmt" [a8c7bf14-da77-4c84-ab79-02e2bb912b2c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003924635s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 addons disable inspektor-gadget --alsologtostderr -v=1: (5.642272175s)
--- PASS: TestAddons/parallel/InspektorGadget (11.65s)

TestAddons/parallel/MetricsServer (6.76s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.021167ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-h2jq2" [c453240d-89be-44da-9070-e49d7ebbc593] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003697202s
addons_test.go:463: (dbg) Run:  kubectl --context addons-269722 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.76s)

TestAddons/parallel/Headlamp (18.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-269722 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-269722 --alsologtostderr -v=1: (1.104824282s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-2h22h" [ed5df4eb-c3e7-42eb-964c-8ec906ad0923] Pending
helpers_test.go:352: "headlamp-dfcdc64b-2h22h" [ed5df4eb-c3e7-42eb-964c-8ec906ad0923] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-2h22h" [ed5df4eb-c3e7-42eb-964c-8ec906ad0923] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003997993s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 addons disable headlamp --alsologtostderr -v=1: (5.913378719s)
--- PASS: TestAddons/parallel/Headlamp (18.02s)

TestAddons/parallel/CloudSpanner (6.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-7m79k" [0584924d-145b-47f3-9c80-e22e59148461] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007884866s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.67s)

TestAddons/parallel/NvidiaDevicePlugin (6.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-knqvl" [916799e0-a31e-4b9a-9acc-b02b72d66299] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003989099s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

TestAddons/parallel/Yakd (11.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8bmdx" [f568bb27-238d-4900-9cd0-fc430be911cb] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00397746s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-269722 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-269722 addons disable yakd --alsologtostderr -v=1: (5.807331649s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

TestAddons/StoppedEnableDisable (88.01s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-269722
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-269722: (1m27.813572482s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-269722
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-269722
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-269722
--- PASS: TestAddons/StoppedEnableDisable (88.01s)

TestCertOptions (81.51s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-675758 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-675758 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m19.587209258s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-675758 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-675758 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-675758 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-675758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-675758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-675758: (1.512018618s)
--- PASS: TestCertOptions (81.51s)

TestCertExpiration (297.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-704658 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-704658 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m2.941711292s)
E1206 10:37:48.985638  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:38:00.695289  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-704658 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-704658 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (53.130561733s)
helpers_test.go:175: Cleaning up "cert-expiration-704658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-704658
--- PASS: TestCertExpiration (297.01s)

TestForceSystemdFlag (76.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-325328 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-325328 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m15.093914459s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-325328 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-325328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-325328
--- PASS: TestForceSystemdFlag (76.19s)

TestForceSystemdEnv (62.54s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-939558 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-939558 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m1.574135384s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-939558 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-939558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-939558
--- PASS: TestForceSystemdEnv (62.54s)

TestErrorSpam/setup (37.31s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-165743 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-165743 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-165743 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-165743 --driver=kvm2  --container-runtime=containerd: (37.311472946s)
--- PASS: TestErrorSpam/setup (37.31s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.66s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 status
--- PASS: TestErrorSpam/status (0.66s)

TestErrorSpam/pause (1.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (4.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 stop: (1.66112269s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 stop: (1.242289153s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-165743 --log_dir /tmp/nospam-165743 stop: (1.516600405s)
--- PASS: TestErrorSpam/stop (4.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-383742/.minikube/files/etc/test/nested/copy/387687/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.05s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-715379 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-715379 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m16.048958521s)
--- PASS: TestFunctional/serial/StartWithProxy (76.05s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.88s)

=== RUN   TestFunctional/serial/SoftStart
I1206 09:31:25.241766  387687 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-715379 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-715379 --alsologtostderr -v=8: (43.879996613s)
functional_test.go:678: soft start took 43.88083444s for "functional-715379" cluster.
I1206 09:32:09.122212  387687 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (43.88s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-715379 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-715379 cache add registry.k8s.io/pause:3.3: (1.016576108s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-715379 /tmp/TestFunctionalserialCacheCmdcacheadd_local2185068608/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 cache add minikube-local-cache-test:functional-715379
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 cache delete minikube-local-cache-test:functional-715379
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-715379
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.505575ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)
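For reference, the cache_reload sequence logged above (remove the image on the node, confirm `crictl inspecti` fails, run `cache reload`, confirm the image is back) can be replayed outside the test harness. The following is a minimal Go sketch, not the functional_test.go source; the binary path out/minikube-linux-amd64 and profile functional-715379 are simply taken from this run.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to a command, echoes its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	minikube := "out/minikube-linux-amd64" // binary path taken from this run
	profile := "functional-715379"         // profile name taken from this run

	// Remove the cached image from the node, then confirm crictl no longer sees it.
	_ = run(minikube, "-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := run(minikube, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("expected inspecti to fail while the image is absent")
	}

	// `cache reload` pushes the locally cached images back onto the node.
	if err := run(minikube, "-p", profile, "cache", "reload"); err != nil {
		log.Fatal(err)
	}
	if err := run(minikube, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("image restored by cache reload")
}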

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 kubectl -- --context functional-715379 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-715379 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.82s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-715379 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 09:32:48.987837  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:48.994254  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:49.005586  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:49.026924  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:49.068250  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:49.149634  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:49.311117  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:49.632804  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:50.274882  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:51.556298  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:32:54.118198  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-715379 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.814969566s)
functional_test.go:776: restart took 38.815074548s for "functional-715379" cluster.
I1206 09:32:54.127602  387687 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (38.82s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-715379 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-715379 logs: (1.304228974s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 logs --file /tmp/TestFunctionalserialLogsFileCmd3559606197/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-715379 logs --file /tmp/TestFunctionalserialLogsFileCmd3559606197/001/logs.txt: (1.195186375s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

                                                
                                    
TestFunctional/serial/InvalidService (4s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-715379 apply -f testdata/invalidsvc.yaml
E1206 09:32:59.240243  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-715379
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-715379: exit status 115 (227.155775ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.160:31395 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-715379 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 config get cpus: exit status 14 (64.038578ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 config get cpus: exit status 14 (56.546474ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
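The config round-trip above can be reproduced by hand: `config get` on an unset key exits non-zero (status 14 in this log), while set/get/unset otherwise succeed. A rough Go sketch, assuming the same binary path and profile as this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs the minikube binary from this report against the functional-715379 profile.
func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-715379"}, args...)...).CombinedOutput()
}

func main() {
	// Getting an unset key is expected to fail.
	if _, err := mk("config", "get", "cpus"); err == nil {
		log.Fatal("expected an error while cpus is unset")
	}
	if _, err := mk("config", "set", "cpus", "2"); err != nil {
		log.Fatal(err)
	}
	out, err := mk("config", "get", "cpus")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cpus = %s", out)
	if _, err := mk("config", "unset", "cpus"); err != nil {
		log.Fatal(err)
	}
}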

                                                
                                    
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-715379 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-715379 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (110.547402ms)

                                                
                                                
-- stdout --
	* [functional-715379] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:33:10.477961  399144 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:33:10.478217  399144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:10.478226  399144 out.go:374] Setting ErrFile to fd 2...
	I1206 09:33:10.478230  399144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:10.478396  399144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:33:10.478780  399144 out.go:368] Setting JSON to false
	I1206 09:33:10.479714  399144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8140,"bootTime":1765005450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:33:10.479769  399144 start.go:143] virtualization: kvm guest
	I1206 09:33:10.482244  399144 out.go:179] * [functional-715379] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:33:10.483451  399144 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:33:10.483457  399144 notify.go:221] Checking for updates...
	I1206 09:33:10.484657  399144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:33:10.485933  399144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:33:10.487026  399144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:33:10.488133  399144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:33:10.489356  399144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:33:10.491003  399144 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:33:10.491629  399144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:33:10.521124  399144 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:33:10.522135  399144 start.go:309] selected driver: kvm2
	I1206 09:33:10.522153  399144 start.go:927] validating driver "kvm2" against &{Name:functional-715379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-715379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:33:10.522374  399144 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:33:10.525842  399144 out.go:203] 
	W1206 09:33:10.527114  399144 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:33:10.528245  399144 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-715379 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.24s)
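The dry-run failure above is expected behaviour: a requested allocation of 250MiB is below minikube's usable minimum of 1800MB, so `start --dry-run` exits with RSRC_INSUFFICIENT_REQ_MEMORY (exit code 23 here). A hedged Go sketch of that check, reusing this run's binary path and profile:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Ask for far too little memory; minikube should refuse before doing any work.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-715379",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=containerd")
	err := cmd.Run()
	exitErr, ok := err.(*exec.ExitError)
	if !ok {
		log.Fatalf("expected a non-zero exit, got %v", err)
	}
	fmt.Println("dry run rejected as expected, exit code:", exitErr.ExitCode())
}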

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-715379 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-715379 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (121.765222ms)

                                                
                                                
-- stdout --
	* [functional-715379] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:33:10.718401  399186 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:33:10.718506  399186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:10.718515  399186 out.go:374] Setting ErrFile to fd 2...
	I1206 09:33:10.718522  399186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:33:10.718974  399186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:33:10.719531  399186 out.go:368] Setting JSON to false
	I1206 09:33:10.720768  399186 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8141,"bootTime":1765005450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:33:10.720836  399186 start.go:143] virtualization: kvm guest
	I1206 09:33:10.722601  399186 out.go:179] * [functional-715379] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 09:33:10.724144  399186 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:33:10.724163  399186 notify.go:221] Checking for updates...
	I1206 09:33:10.726252  399186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:33:10.727407  399186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:33:10.728380  399186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:33:10.729438  399186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:33:10.730564  399186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:33:10.732295  399186 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:33:10.732971  399186 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:33:10.766503  399186 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1206 09:33:10.768236  399186 start.go:309] selected driver: kvm2
	I1206 09:33:10.768258  399186 start.go:927] validating driver "kvm2" against &{Name:functional-715379 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-715379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:33:10.768384  399186 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:33:10.770464  399186 out.go:203] 
	W1206 09:33:10.771650  399186 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 09:33:10.772738  399186 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (344.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-715379 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-715379 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9trdj" [d6f4e72e-34af-40e7-a143-4075702d48de] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-9trdj" [d6f4e72e-34af-40e7-a143-4075702d48de] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 5m44.003319654s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.160:30257
functional_test.go:1680: http://192.168.39.160:30257: success! body:
Request served by hello-node-connect-7d85dfc575-9trdj

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.160:30257
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (344.45s)
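The final step above (resolve the NodePort URL with `service ... --url`, then GET it and check that the echo-server names the serving pod) can be sketched roughly as follows in Go; the binary path, profile, and service name are copied from this run, and the URL will differ between runs:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the reachable URL of the NodePort service.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-715379",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.160:30257 in this run

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)

	// The echo-server reports which pod served the request.
	if !strings.Contains(string(body), "hello-node-connect") {
		log.Fatalf("unexpected body: %s", body)
	}
	fmt.Printf("success! body:\n%s", body)
}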

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh -n functional-715379 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 cp functional-715379:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2444161634/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh -n functional-715379 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh -n functional-715379 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.13s)

                                                
                                    
TestFunctional/parallel/MySQL (27.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-715379 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-lv58m" [e264477e-e1eb-4048-b60b-c27b0e389820] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-lv58m" [e264477e-e1eb-4048-b60b-c27b0e389820] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.003519053s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-715379 exec mysql-5bb876957f-lv58m -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-715379 exec mysql-5bb876957f-lv58m -- mysql -ppassword -e "show databases;": exit status 1 (161.944763ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 09:33:22.484331  387687 retry.go:31] will retry after 1.117772797s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-715379 exec mysql-5bb876957f-lv58m -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-715379 exec mysql-5bb876957f-lv58m -- mysql -ppassword -e "show databases;": exit status 1 (198.757269ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 09:33:23.801853  387687 retry.go:31] will retry after 1.294386388s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-715379 exec mysql-5bb876957f-lv58m -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-715379 exec mysql-5bb876957f-lv58m -- mysql -ppassword -e "show databases;": exit status 1 (118.608938ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 09:33:25.215718  387687 retry.go:31] will retry after 1.41996144s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-715379 exec mysql-5bb876957f-lv58m -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-715379 exec mysql-5bb876957f-lv58m -- mysql -ppassword -e "show databases;": exit status 1 (110.73638ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 09:33:26.747268  387687 retry.go:31] will retry after 3.366909591s: exit status 1
E1206 09:33:29.964344  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-715379 exec mysql-5bb876957f-lv58m -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.09s)
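The retries above show the usual pattern for a freshly started MySQL pod: `kubectl exec ... mysql -e "show databases;"` fails with access or socket errors until the server finishes initialising, and the harness retries with a growing delay. A rough standalone Go sketch of that loop (pod name and context copied from this run; not the harness's own retry helper):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-5bb876957f-lv58m" // pod name from this run; yours will differ
	args := []string{
		"--context", "functional-715379", "exec", pod, "--",
		"mysql", "-ppassword", "-e", "show databases;",
	}

	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("mysql ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		log.Printf("attempt %d failed (%v), retrying in %s", attempt, err, delay)
		time.Sleep(delay)
		delay += delay / 2 // roughly the growing backoff seen in the retries above
	}
	log.Fatal("mysql never became reachable")
}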

                                                
                                    
TestFunctional/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/387687/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo cat /etc/test/nested/copy/387687/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)

                                                
                                    
TestFunctional/parallel/CertSync (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/387687.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo cat /etc/ssl/certs/387687.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/387687.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo cat /usr/share/ca-certificates/387687.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3876872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo cat /etc/ssl/certs/3876872.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3876872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo cat /usr/share/ca-certificates/3876872.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.05s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-715379 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 ssh "sudo systemctl is-active docker": exit status 1 (172.378235ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 ssh "sudo systemctl is-active crio": exit status 1 (170.046061ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)
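The check above confirms that on a containerd cluster the other runtimes are disabled: `systemctl is-active docker` and `systemctl is-active crio` print "inactive" and exit non-zero over ssh. A small Go sketch of the same assertion, assuming this run's binary path and profile:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	for _, rt := range []string{"docker", "crio"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-715379",
			"ssh", "sudo systemctl is-active "+rt).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// err is expected to be non-nil: is-active exits non-zero for inactive units.
		if err == nil || state == "active" {
			log.Fatalf("%s should be disabled, got %q", rt, state)
		}
		fmt.Printf("%s: %s (as expected)\n", rt, state)
	}
}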

                                                
                                    
TestFunctional/parallel/License (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-715379 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-715379 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-pwg8d" [f952cb6a-c6a9-413b-982f-ea8dca858eac] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-pwg8d" [f952cb6a-c6a9-413b-982f-ea8dca858eac] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004317697s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.19s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-715379 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-715379
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-715379
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-715379 image ls --format short --alsologtostderr:
I1206 09:33:30.969759  399710 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:30.970021  399710 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:30.970032  399710 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:30.970036  399710 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:30.970307  399710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:33:30.970893  399710 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:30.970995  399710 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:30.973580  399710 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:30.975500  399710 main.go:143] libmachine: domain functional-715379 has defined MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:30.975845  399710 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:93:ff", ip: ""} in network mk-functional-715379: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:24 +0000 UTC Type:0 Mac:52:54:00:9c:93:ff Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:functional-715379 Clientid:01:52:54:00:9c:93:ff}
I1206 09:33:30.975881  399710 main.go:143] libmachine: domain functional-715379 has defined IP address 192.168.39.160 and MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:30.975995  399710 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-715379/id_rsa Username:docker}
I1206 09:33:31.061875  399710 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-715379 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:a5f569 │ 27.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:88320b │ 17.4MB │
│ docker.io/library/minikube-local-cache-test │ functional-715379  │ sha256:ce8d52 │ 992B   │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:01e8ba │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-715379  │ sha256:9056ab │ 2.37MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:8aa150 │ 26MB   │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-715379 image ls --format table --alsologtostderr:
I1206 09:33:31.345181  399732 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:31.345411  399732 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:31.345419  399732 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:31.345423  399732 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:31.345612  399732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:33:31.346252  399732 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:31.346409  399732 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:31.348521  399732 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:31.350659  399732 main.go:143] libmachine: domain functional-715379 has defined MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:31.351045  399732 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:93:ff", ip: ""} in network mk-functional-715379: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:24 +0000 UTC Type:0 Mac:52:54:00:9c:93:ff Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:functional-715379 Clientid:01:52:54:00:9c:93:ff}
I1206 09:33:31.351071  399732 main.go:143] libmachine: domain functional-715379 has defined IP address 192.168.39.160 and MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:31.351240  399732 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-715379/id_rsa Username:docker}
I1206 09:33:31.435330  399732 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-715379 image ls --format json --alsologtostderr:
[{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"22818657"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:a3e246e9556e93d71e2850085
ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-715379"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["regis
try.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"27060130"},{"id":"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"17382272"},{"id":"sha256:ce8d5202ca525f2d9f98db65e2958171335fd3011ce0ddd2057443b4b76fb7f6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-715379"],"size":"992"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-miniku
be/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"25963482"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-715379 image ls --format json --alsologtostderr:
I1206 09:33:31.159467  399721 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:31.159709  399721 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:31.159716  399721 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:31.159720  399721 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:31.159935  399721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:33:31.160444  399721 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:31.160545  399721 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:31.162886  399721 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:31.165480  399721 main.go:143] libmachine: domain functional-715379 has defined MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:31.165985  399721 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:93:ff", ip: ""} in network mk-functional-715379: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:24 +0000 UTC Type:0 Mac:52:54:00:9c:93:ff Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:functional-715379 Clientid:01:52:54:00:9c:93:ff}
I1206 09:33:31.166018  399721 main.go:143] libmachine: domain functional-715379 has defined IP address 192.168.39.160 and MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:31.166204  399721 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-715379/id_rsa Username:docker}
I1206 09:33:31.249665  399721 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)
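Note: the "image ls --format json" stdout captured above is a single JSON array of image records with id, repoDigests, repoTags, and size keys (size is a decimal string of bytes). A minimal, illustrative Go sketch for consuming that output follows; the program and struct names are ours and not part of minikube or its test suite, only the JSON keys come from the log.

// decode_image_ls.go - illustrative only; not part of minikube or its tests.
// Reads "image ls --format json" output on stdin and prints tags and size per image.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// imageEntry mirrors the keys visible in the JSON output above.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a decimal string of bytes
}

func main() {
	var images []imageEntry
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decoding image list:", err)
		os.Exit(1)
	}
	for _, img := range images {
		fmt.Printf("%s\t%s bytes\n", strings.Join(img.RepoTags, ", "), img.Size)
	}
}

Example use: out/minikube-linux-amd64 -p functional-715379 image ls --format json | go run decode_image_ls.go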

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-715379 image ls --format yaml --alsologtostderr:
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "27060130"
- id: sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "22818657"
- id: sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "17382272"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-715379
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:ce8d5202ca525f2d9f98db65e2958171335fd3011ce0ddd2057443b4b76fb7f6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-715379
size: "992"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "25963482"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-715379 image ls --format yaml --alsologtostderr:
I1206 09:33:31.544780  399743 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:31.545037  399743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:31.545045  399743 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:31.545049  399743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:31.545220  399743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:33:31.545747  399743 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:31.545839  399743 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:31.547940  399743 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:31.550205  399743 main.go:143] libmachine: domain functional-715379 has defined MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:31.550604  399743 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:93:ff", ip: ""} in network mk-functional-715379: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:24 +0000 UTC Type:0 Mac:52:54:00:9c:93:ff Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:functional-715379 Clientid:01:52:54:00:9c:93:ff}
I1206 09:33:31.550629  399743 main.go:143] libmachine: domain functional-715379 has defined IP address 192.168.39.160 and MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:31.550759  399743 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-715379/id_rsa Username:docker}
I1206 09:33:31.635587  399743 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 ssh pgrep buildkitd: exit status 1 (160.038455ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image build -t localhost/my-image:functional-715379 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-715379 image build -t localhost/my-image:functional-715379 testdata/build --alsologtostderr: (1.976932946s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-715379 image build -t localhost/my-image:functional-715379 testdata/build --alsologtostderr:
I1206 09:33:31.890767  399765 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:31.890894  399765 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:31.890906  399765 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:31.890911  399765 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:31.891081  399765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:33:31.891611  399765 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:31.892321  399765 config.go:182] Loaded profile config "functional-715379": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1206 09:33:31.894283  399765 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:31.896244  399765 main.go:143] libmachine: domain functional-715379 has defined MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:31.896621  399765 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:93:ff", ip: ""} in network mk-functional-715379: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:24 +0000 UTC Type:0 Mac:52:54:00:9c:93:ff Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:functional-715379 Clientid:01:52:54:00:9c:93:ff}
I1206 09:33:31.896647  399765 main.go:143] libmachine: domain functional-715379 has defined IP address 192.168.39.160 and MAC address 52:54:00:9c:93:ff in network mk-functional-715379
I1206 09:33:31.896767  399765 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-715379/id_rsa Username:docker}
I1206 09:33:31.982054  399765 build_images.go:162] Building image from path: /tmp/build.4076176106.tar
I1206 09:33:31.982140  399765 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 09:33:31.994557  399765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4076176106.tar
I1206 09:33:32.000547  399765 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4076176106.tar: stat -c "%s %y" /var/lib/minikube/build/build.4076176106.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4076176106.tar': No such file or directory
I1206 09:33:32.000581  399765 ssh_runner.go:362] scp /tmp/build.4076176106.tar --> /var/lib/minikube/build/build.4076176106.tar (3072 bytes)
I1206 09:33:32.031239  399765 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4076176106
I1206 09:33:32.043356  399765 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4076176106 -xf /var/lib/minikube/build/build.4076176106.tar
I1206 09:33:32.054147  399765 containerd.go:394] Building image: /var/lib/minikube/build/build.4076176106
I1206 09:33:32.054218  399765 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4076176106 --local dockerfile=/var/lib/minikube/build/build.4076176106 --output type=image,name=localhost/my-image:functional-715379
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d994d01ab97b5ad2df6ccd4f2d57344933efcb15d3368011ce6c34d33b1b6850
#8 exporting manifest sha256:d994d01ab97b5ad2df6ccd4f2d57344933efcb15d3368011ce6c34d33b1b6850 0.0s done
#8 exporting config sha256:59cab14becbc58a411cb1d362f2155566524bfbd83a5fba1d8076e94a1b14126 0.0s done
#8 naming to localhost/my-image:functional-715379 done
#8 DONE 0.2s
I1206 09:33:33.779024  399765 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4076176106 --local dockerfile=/var/lib/minikube/build/build.4076176106 --output type=image,name=localhost/my-image:functional-715379: (1.724775072s)
I1206 09:33:33.779096  399765 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4076176106
I1206 09:33:33.796481  399765 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4076176106.tar
I1206 09:33:33.807948  399765 build_images.go:218] Built localhost/my-image:functional-715379 from /tmp/build.4076176106.tar
I1206 09:33:33.807982  399765 build_images.go:134] succeeded building to: functional-715379
I1206 09:33:33.807989  399765 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls
E1206 09:34:10.926704  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:35:32.848610  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:37:48.982755  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:38:16.690749  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.33s)
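Note: the build logged above amounts to copying the build context tarball into the VM, untarring it under /var/lib/minikube/build, and running buildctl against that directory. The following Go sketch is illustrative only: it mirrors the buildctl invocation with the flags shown verbatim in the log, using the example path and image name from this run, and assumes sudo and buildctl are available on the target host.

// run_buildctl.go - illustrative only; mirrors the buildctl command minikube logs above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	buildDir := "/var/lib/minikube/build/build.4076176106" // example path taken from the log above

	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+buildDir,
		"--local", "dockerfile="+buildDir,
		"--output", "type=image,name=localhost/my-image:functional-715379",
	)
	// Stream the build progress (the #1..#8 steps shown above) to the caller.
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		log.Fatalf("buildctl build failed: %v", err)
	}
}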

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-715379
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.96s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image load --daemon kicbase/echo-server:functional-715379 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-715379 image load --daemon kicbase/echo-server:functional-715379 --alsologtostderr: (1.021842492s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image load --daemon kicbase/echo-server:functional-715379 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-715379
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image load --daemon kicbase/echo-server:functional-715379 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-715379 image load --daemon kicbase/echo-server:functional-715379 --alsologtostderr: (1.257750517s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image save kicbase/echo-server:functional-715379 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image rm kicbase/echo-server:functional-715379 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 service list -o json
functional_test.go:1504: Took "284.66755ms" to run "out/minikube-linux-amd64 -p functional-715379 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-715379
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 image save --daemon kicbase/echo-server:functional-715379 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-715379
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.160:32460
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.160:32460
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1206 09:33:09.482639  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "281.240315ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.752095ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/MountCmd/any-port (10.02s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdany-port2536948829/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765013589833940073" to /tmp/TestFunctionalparallelMountCmdany-port2536948829/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765013589833940073" to /tmp/TestFunctionalparallelMountCmdany-port2536948829/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765013589833940073" to /tmp/TestFunctionalparallelMountCmdany-port2536948829/001/test-1765013589833940073
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.446559ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:33:10.041749  387687 retry.go:31] will retry after 312.219018ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 09:33 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 09:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 09:33 test-1765013589833940073
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh cat /mount-9p/test-1765013589833940073
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-715379 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c91b51eb-6c51-4523-b803-0ba48399cc49] Pending
helpers_test.go:352: "busybox-mount" [c91b51eb-6c51-4523-b803-0ba48399cc49] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c91b51eb-6c51-4523-b803-0ba48399cc49] Running
helpers_test.go:352: "busybox-mount" [c91b51eb-6c51-4523-b803-0ba48399cc49] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c91b51eb-6c51-4523-b803-0ba48399cc49] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.005400361s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-715379 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdany-port2536948829/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.02s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "255.383307ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "65.669548ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/specific-port (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdspecific-port61418453/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (155.277144ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:33:20.013307  387687 retry.go:31] will retry after 717.318355ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdspecific-port61418453/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 ssh "sudo umount -f /mount-9p": exit status 1 (165.04612ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-715379 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdspecific-port61418453/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T" /mount1: exit status 1 (188.288769ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:33:21.609185  387687 retry.go:31] will retry after 382.925426ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-715379 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-715379 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-715379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2671421261/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-715379
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-715379
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-715379
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-383742/.minikube/files/etc/test/nested/copy/387687/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (78.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-878866 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-878866 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (1m18.853849612s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (78.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (44.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 09:40:36.479707  387687 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-878866 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-878866 --alsologtostderr -v=8: (44.280394409s)
functional_test.go:678: soft start took 44.280805087s for "functional-878866" cluster.
I1206 09:41:20.760458  387687 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (44.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-878866 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2799092374/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 cache add minikube-local-cache-test:functional-878866
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 cache delete minikube-local-cache-test:functional-878866
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-878866
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (172.270318ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.37s)
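The reload step restores images that were deleted inside the node from the host-side cache; a sketch of the same round trip, with `demo` as a placeholder profile:

    # delete the image inside the node, confirm it is gone, then restore it from the cache
    minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    minikube -p demo cache reload
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again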

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 kubectl -- --context functional-878866 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-878866 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (41.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-878866 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-878866 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.817725105s)
functional_test.go:776: restart took 41.81784541s for "functional-878866" cluster.
I1206 09:42:08.763619  387687 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (41.82s)
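This restart demonstrates injecting a component flag via --extra-config; a minimal sketch with a placeholder profile:

    # restart the cluster, enabling an extra apiserver admission plugin, and wait for all components
    minikube start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all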

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-878866 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 logs: (1.198141574s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs813202770/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs813202770/001/logs.txt: (1.22216741s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.22s)
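Both logs variants are available directly from the CLI; a sketch (profile name and output path are placeholders):

    # print cluster logs to stdout, or write them to a file for attaching to bug reports
    minikube -p demo logs
    minikube -p demo logs --file /tmp/demo-logs.txt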

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-878866 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-878866
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-878866: exit status 115 (224.603795ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.195:32217 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-878866 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 config get cpus: exit status 14 (66.27518ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 config get cpus: exit status 14 (65.951542ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.43s)
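The exit-status-14 behaviour tested here is how `config get` signals an unset key; a sketch with `demo` as a placeholder profile:

    # get on an unset key exits non-zero (status 14); set/unset round-trip
    minikube -p demo config unset cpus
    minikube -p demo config get cpus || echo "cpus is not set"
    minikube -p demo config set cpus 2
    minikube -p demo config get cpus          # prints 2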

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (127.686975ms)

                                                
                                                
-- stdout --
	* [functional-878866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:42:23.865388  403209 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:42:23.865484  403209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:23.865495  403209 out.go:374] Setting ErrFile to fd 2...
	I1206 09:42:23.865500  403209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:23.865727  403209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:42:23.866216  403209 out.go:368] Setting JSON to false
	I1206 09:42:23.867174  403209 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8694,"bootTime":1765005450,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:42:23.867226  403209 start.go:143] virtualization: kvm guest
	I1206 09:42:23.869227  403209 out.go:179] * [functional-878866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:42:23.870394  403209 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:42:23.870403  403209 notify.go:221] Checking for updates...
	I1206 09:42:23.873051  403209 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:42:23.874756  403209 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:42:23.876064  403209 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:42:23.877137  403209 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:42:23.878172  403209 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:42:23.879560  403209 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:23.880293  403209 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:23.918232  403209 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:42:23.919418  403209 start.go:309] selected driver: kvm2
	I1206 09:42:23.919442  403209 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:23.919559  403209 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:23.924789  403209 out.go:203] 
	W1206 09:42:23.925993  403209 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:42:23.930438  403209 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-878866 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.26s)
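Dry-run mode validates the requested flags against the existing profile without starting anything; requesting too little memory fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), as seen above. A sketch under the same assumptions as the test:

    # rejected: 250MB is below the usable minimum
    minikube start -p demo --dry-run --memory 250MB --driver=kvm2 --container-runtime=containerd
    # accepted: no memory override, nothing is actually started
    minikube start -p demo --dry-run --driver=kvm2 --container-runtime=containerd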

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
I1206 09:42:24.046600  387687 detect.go:223] nested VM detected
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-878866 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (131.82384ms)

                                                
                                                
-- stdout --
	* [functional-878866] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:42:24.019904  403261 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:42:24.020138  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020149  403261 out.go:374] Setting ErrFile to fd 2...
	I1206 09:42:24.020155  403261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:42:24.020466  403261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:42:24.020924  403261 out.go:368] Setting JSON to false
	I1206 09:42:24.021821  403261 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8694,"bootTime":1765005450,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:42:24.021904  403261 start.go:143] virtualization: kvm guest
	I1206 09:42:24.023333  403261 out.go:179] * [functional-878866] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 09:42:24.025176  403261 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:42:24.025162  403261 notify.go:221] Checking for updates...
	I1206 09:42:24.026373  403261 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:42:24.027496  403261 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 09:42:24.028692  403261 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 09:42:24.029895  403261 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:42:24.031067  403261 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:42:24.032603  403261 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 09:42:24.033150  403261 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:42:24.072141  403261 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1206 09:42:24.073419  403261 start.go:309] selected driver: kvm2
	I1206 09:42:24.073436  403261 start.go:927] validating driver "kvm2" against &{Name:functional-878866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-878866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:42:24.073570  403261 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:42:24.075812  403261 out.go:203] 
	W1206 09:42:24.076830  403261 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 09:42:24.077921  403261 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.90s)
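The status command accepts Go-template and JSON output formats, as exercised above; a sketch with a placeholder profile:

    # default, templated, and JSON status output
    minikube -p demo status
    minikube -p demo status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p demo status -o json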

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh -n functional-878866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 cp functional-878866:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp999325647/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh -n functional-878866 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh -n functional-878866 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.26s)
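The cp subcommand copies files into and out of the node over SSH; a sketch of the round trip, with file paths and the `demo` profile as placeholders:

    # host -> node, verify inside the node, then node -> host
    minikube -p demo cp ./cp-test.txt /home/docker/cp-test.txt
    minikube -p demo ssh "sudo cat /home/docker/cp-test.txt"
    minikube -p demo cp demo:/home/docker/cp-test.txt ./cp-test.copy.txt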

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/387687/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo cat /etc/test/nested/copy/387687/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/387687.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo cat /etc/ssl/certs/387687.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/387687.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo cat /usr/share/ca-certificates/387687.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3876872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo cat /etc/ssl/certs/3876872.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3876872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo cat /usr/share/ca-certificates/3876872.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-878866 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 ssh "sudo systemctl is-active docker": exit status 1 (166.473246ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 ssh "sudo systemctl is-active crio": exit status 1 (176.130222ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "300.319836ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.896289ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (4.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3691570444/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765014136250306892" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3691570444/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765014136250306892" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3691570444/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765014136250306892" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3691570444/001/test-1765014136250306892
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (172.799289ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:42:16.423423  387687 retry.go:31] will retry after 463.994805ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 09:42 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 09:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 09:42 test-1765014136250306892
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh cat /mount-9p/test-1765014136250306892
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-878866 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [5b258008-690e-46ac-95e9-db4745241e5c] Pending
helpers_test.go:352: "busybox-mount" [5b258008-690e-46ac-95e9-db4745241e5c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [5b258008-690e-46ac-95e9-db4745241e5c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [5b258008-690e-46ac-95e9-db4745241e5c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003026066s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-878866 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3691570444/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (4.98s)
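The mount test drives a 9p mount from the host into the guest; a simplified sketch of the same flow, with the host directory and profile name as placeholders (minikube mount stays in the foreground, so it is backgrounded here and should be cleaned up afterwards):

    # expose a host directory inside the node over 9p, verify, then unmount
    minikube mount -p demo /tmp/shared:/mount-9p &
    minikube -p demo ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p demo ssh "ls -la /mount-9p"
    minikube -p demo ssh "sudo umount -f /mount-9p"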

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "250.772966ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.805308ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.31s)
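The profile listing used above supports table and JSON output; the --light variant skips probing cluster status, which is why it returns noticeably faster. A sketch:

    # human-readable, machine-readable, and fast (no status probe) listings
    minikube profile list
    minikube profile list -o json
    minikube profile list -o json --light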

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-878866 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-878866
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-878866
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-878866 image ls --format short --alsologtostderr:
I1206 09:48:26.814314  404766 out.go:360] Setting OutFile to fd 1 ...
I1206 09:48:26.814399  404766 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:26.814403  404766 out.go:374] Setting ErrFile to fd 2...
I1206 09:48:26.814406  404766 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:26.814588  404766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:48:26.815147  404766 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:26.815246  404766 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:26.817179  404766 ssh_runner.go:195] Run: systemctl --version
I1206 09:48:26.819198  404766 main.go:143] libmachine: domain functional-878866 has defined MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:26.819581  404766 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:ce:27", ip: ""} in network mk-functional-878866: {Iface:virbr1 ExpiryTime:2025-12-06 10:39:33 +0000 UTC Type:0 Mac:52:54:00:06:ce:27 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-878866 Clientid:01:52:54:00:06:ce:27}
I1206 09:48:26.819606  404766 main.go:143] libmachine: domain functional-878866 has defined IP address 192.168.39.195 and MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:26.819726  404766 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-878866/id_rsa Username:docker}
I1206 09:48:26.903017  404766 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-878866 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kicbase/echo-server               │ functional-878866  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-878866  │ sha256:ce8d52 │ 992B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:aa9d02 │ 27.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:45f3cc │ 23.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:7bb621 │ 17.2MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:8a4ded │ 25.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ localhost/my-image                          │ functional-878866  │ sha256:e9c335 │ 775kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:aa5e3e │ 23.6MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-878866 image ls --format table --alsologtostderr:
I1206 09:48:29.630970  404830 out.go:360] Setting OutFile to fd 1 ...
I1206 09:48:29.631100  404830 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:29.631111  404830 out.go:374] Setting ErrFile to fd 2...
I1206 09:48:29.631117  404830 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:29.631324  404830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:48:29.631877  404830 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:29.632002  404830 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:29.633940  404830 ssh_runner.go:195] Run: systemctl --version
I1206 09:48:29.635918  404830 main.go:143] libmachine: domain functional-878866 has defined MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:29.636322  404830 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:ce:27", ip: ""} in network mk-functional-878866: {Iface:virbr1 ExpiryTime:2025-12-06 10:39:33 +0000 UTC Type:0 Mac:52:54:00:06:ce:27 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-878866 Clientid:01:52:54:00:06:ce:27}
I1206 09:48:29.636359  404830 main.go:143] libmachine: domain functional-878866 has defined IP address 192.168.39.195 and MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:29.636483  404830 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-878866/id_rsa Username:docker}
I1206 09:48:29.718797  404830 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-878866 image ls --format json --alsologtostderr:
[{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:e9c335317f52720f206b2db3c5f6d8c7fbed1726ccd36409ba6505bf0023fbb2","repoDigests":[],"repoTags":["localhost/my-image:functional-878866"],"size":"774887"},{"id":"sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"25786942"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-
server:functional-878866"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"23121143"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:ce8d5202ca525f2d9f98db65e2958171335fd3011ce0ddd2057443b4b76fb7f6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-878866"],"size":"992"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3
bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"27671920"},{"id":"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"23553139"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{
"id":"sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"17228488"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-878866 image ls --format json --alsologtostderr:
I1206 09:48:29.440104  404819 out.go:360] Setting OutFile to fd 1 ...
I1206 09:48:29.440225  404819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:29.440235  404819 out.go:374] Setting ErrFile to fd 2...
I1206 09:48:29.440239  404819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:29.440443  404819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:48:29.440975  404819 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:29.441070  404819 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:29.442989  404819 ssh_runner.go:195] Run: systemctl --version
I1206 09:48:29.444916  404819 main.go:143] libmachine: domain functional-878866 has defined MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:29.445234  404819 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:ce:27", ip: ""} in network mk-functional-878866: {Iface:virbr1 ExpiryTime:2025-12-06 10:39:33 +0000 UTC Type:0 Mac:52:54:00:06:ce:27 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-878866 Clientid:01:52:54:00:06:ce:27}
I1206 09:48:29.445252  404819 main.go:143] libmachine: domain functional-878866 has defined IP address 192.168.39.195 and MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:29.445367  404819 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-878866/id_rsa Username:docker}
I1206 09:48:29.525531  404819 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.19s)
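Note on the output format: the JSON above is a flat array of objects with id, repoDigests, repoTags, and size keys (size is a byte count encoded as a string), and the same data backs the table and yaml variants of the command elsewhere in this run. The Go sketch below is for illustration only and is not part of the test suite; the struct definition, field mapping, and hard-coded profile name are assumptions chosen to match the log above.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// image mirrors the keys visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Same command the test invokes; the profile name is taken from the log above.
	out, err := exec.Command("minikube", "-p", "functional-878866", "image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			// e.g. "registry.k8s.io/pause:3.10.1  320448 bytes"
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}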

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-878866 image ls --format yaml --alsologtostderr:
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "25786942"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-878866
size: "2372971"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "23121143"
- id: sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "17228488"
- id: sha256:ce8d5202ca525f2d9f98db65e2958171335fd3011ce0ddd2057443b4b76fb7f6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-878866
size: "992"
- id: sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "23553139"
- id: sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "27671920"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-878866 image ls --format yaml --alsologtostderr:
I1206 09:48:27.003909  404776 out.go:360] Setting OutFile to fd 1 ...
I1206 09:48:27.004018  404776 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:27.004028  404776 out.go:374] Setting ErrFile to fd 2...
I1206 09:48:27.004032  404776 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:27.004251  404776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:48:27.004809  404776 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:27.004933  404776 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:27.007029  404776 ssh_runner.go:195] Run: systemctl --version
I1206 09:48:27.008993  404776 main.go:143] libmachine: domain functional-878866 has defined MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:27.009391  404776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:ce:27", ip: ""} in network mk-functional-878866: {Iface:virbr1 ExpiryTime:2025-12-06 10:39:33 +0000 UTC Type:0 Mac:52:54:00:06:ce:27 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-878866 Clientid:01:52:54:00:06:ce:27}
I1206 09:48:27.009416  404776 main.go:143] libmachine: domain functional-878866 has defined IP address 192.168.39.195 and MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:27.009597  404776 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-878866/id_rsa Username:docker}
I1206 09:48:27.091663  404776 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 ssh pgrep buildkitd: exit status 1 (151.897079ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image build -t localhost/my-image:functional-878866 testdata/build --alsologtostderr
E1206 09:48:28.396904  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 image build -t localhost/my-image:functional-878866 testdata/build --alsologtostderr: (1.903502427s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-878866 image build -t localhost/my-image:functional-878866 testdata/build --alsologtostderr:
I1206 09:48:27.353368  404798 out.go:360] Setting OutFile to fd 1 ...
I1206 09:48:27.353466  404798 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:27.353474  404798 out.go:374] Setting ErrFile to fd 2...
I1206 09:48:27.353478  404798 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:48:27.353653  404798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
I1206 09:48:27.354226  404798 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:27.354840  404798 config.go:182] Loaded profile config "functional-878866": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 09:48:27.356890  404798 ssh_runner.go:195] Run: systemctl --version
I1206 09:48:27.359069  404798 main.go:143] libmachine: domain functional-878866 has defined MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:27.359457  404798 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:ce:27", ip: ""} in network mk-functional-878866: {Iface:virbr1 ExpiryTime:2025-12-06 10:39:33 +0000 UTC Type:0 Mac:52:54:00:06:ce:27 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-878866 Clientid:01:52:54:00:06:ce:27}
I1206 09:48:27.359482  404798 main.go:143] libmachine: domain functional-878866 has defined IP address 192.168.39.195 and MAC address 52:54:00:06:ce:27 in network mk-functional-878866
I1206 09:48:27.359671  404798 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/functional-878866/id_rsa Username:docker}
I1206 09:48:27.439691  404798 build_images.go:162] Building image from path: /tmp/build.397931500.tar
I1206 09:48:27.439755  404798 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 09:48:27.452690  404798 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.397931500.tar
I1206 09:48:27.459220  404798 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.397931500.tar: stat -c "%s %y" /var/lib/minikube/build/build.397931500.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.397931500.tar': No such file or directory
I1206 09:48:27.459257  404798 ssh_runner.go:362] scp /tmp/build.397931500.tar --> /var/lib/minikube/build/build.397931500.tar (3072 bytes)
I1206 09:48:27.488817  404798 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.397931500
I1206 09:48:27.502980  404798 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.397931500 -xf /var/lib/minikube/build/build.397931500.tar
I1206 09:48:27.514735  404798 containerd.go:394] Building image: /var/lib/minikube/build/build.397931500
I1206 09:48:27.514827  404798 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.397931500 --local dockerfile=/var/lib/minikube/build/build.397931500 --output type=image,name=localhost/my-image:functional-878866
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:cc723367df2f52a237cfc985ae91de2f1aa5f4c79995e0c47d13937a81af2401
#8 exporting manifest sha256:cc723367df2f52a237cfc985ae91de2f1aa5f4c79995e0c47d13937a81af2401 0.0s done
#8 exporting config sha256:e9c335317f52720f206b2db3c5f6d8c7fbed1726ccd36409ba6505bf0023fbb2 0.0s done
#8 naming to localhost/my-image:functional-878866 done
#8 DONE 0.2s
I1206 09:48:29.153378  404798 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.397931500 --local dockerfile=/var/lib/minikube/build/build.397931500 --output type=image,name=localhost/my-image:functional-878866: (1.63850994s)
I1206 09:48:29.153467  404798 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.397931500
I1206 09:48:29.172484  404798 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.397931500.tar
I1206 09:48:29.191112  404798 build_images.go:218] Built localhost/my-image:functional-878866 from /tmp/build.397931500.tar
I1206 09:48:29.191166  404798 build_images.go:134] succeeded building to: functional-878866
I1206 09:48:29.191175  404798 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.24s)
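Note on the build: the buildkit trace above (steps #1 through #8) implies that testdata/build contains a small build context with a content.txt file and a three-instruction Dockerfile. The actual file is not reproduced in this log, so the following reconstruction is an assumption based only on the steps shown:

# Reconstructed from the trace above; not the verbatim testdata/build/Dockerfile
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /

Step #5 corresponds to the FROM line (buildkit resolves the latest tag to the digest shown), step #6 to the no-op RUN, and step #7 to the ADD of content.txt at the image root.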

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-878866
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image load --daemon kicbase/echo-server:functional-878866 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 image load --daemon kicbase/echo-server:functional-878866 --alsologtostderr: (1.033096237s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image load --daemon kicbase/echo-server:functional-878866 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-878866
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image load --daemon kicbase/echo-server:functional-878866 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo742427201/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (169.378553ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:42:21.401839  387687 retry.go:31] will retry after 612.847832ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo742427201/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 ssh "sudo umount -f /mount-9p": exit status 1 (202.502296ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-878866 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo742427201/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image save kicbase/echo-server:functional-878866 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image rm kicbase/echo-server:functional-878866 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T" /mount1: exit status 1 (181.383898ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:42:22.960446  387687 retry.go:31] will retry after 423.113782ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-878866 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-878866 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo82201912/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-878866
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 image save --daemon kicbase/echo-server:functional-878866 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-878866
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 update-context --alsologtostderr -v=2
E1206 09:49:12.052147  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 service list: (1.194720921s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-878866 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-878866 service list -o json: (1.192419794s)
functional_test.go:1504: Took "1.192530023s" to run "out/minikube-linux-amd64 -p functional-878866 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-878866
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-878866
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-878866
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (190.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
E1206 09:52:48.982757  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:00.695425  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (3m10.122378682s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (190.68s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 kubectl -- rollout status deployment/busybox: (2.261877015s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-79lwz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-fv29j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-rg7x7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-79lwz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-fv29j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-rg7x7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-79lwz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-fv29j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-rg7x7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.70s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-79lwz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-79lwz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-fv29j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-fv29j -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-rg7x7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 kubectl -- exec busybox-7b57f96db7-rg7x7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (42.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 node add --alsologtostderr -v 5: (42.030649047s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (42.69s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-641588 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp testdata/cp-test.txt ha-641588:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3284100187/001/cp-test_ha-641588.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588:/home/docker/cp-test.txt ha-641588-m02:/home/docker/cp-test_ha-641588_ha-641588-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m02 "sudo cat /home/docker/cp-test_ha-641588_ha-641588-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588:/home/docker/cp-test.txt ha-641588-m03:/home/docker/cp-test_ha-641588_ha-641588-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m03 "sudo cat /home/docker/cp-test_ha-641588_ha-641588-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588:/home/docker/cp-test.txt ha-641588-m04:/home/docker/cp-test_ha-641588_ha-641588-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m04 "sudo cat /home/docker/cp-test_ha-641588_ha-641588-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp testdata/cp-test.txt ha-641588-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3284100187/001/cp-test_ha-641588-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m02:/home/docker/cp-test.txt ha-641588:/home/docker/cp-test_ha-641588-m02_ha-641588.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588 "sudo cat /home/docker/cp-test_ha-641588-m02_ha-641588.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m02:/home/docker/cp-test.txt ha-641588-m03:/home/docker/cp-test_ha-641588-m02_ha-641588-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m03 "sudo cat /home/docker/cp-test_ha-641588-m02_ha-641588-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m02:/home/docker/cp-test.txt ha-641588-m04:/home/docker/cp-test_ha-641588-m02_ha-641588-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m04 "sudo cat /home/docker/cp-test_ha-641588-m02_ha-641588-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp testdata/cp-test.txt ha-641588-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3284100187/001/cp-test_ha-641588-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m03:/home/docker/cp-test.txt ha-641588:/home/docker/cp-test_ha-641588-m03_ha-641588.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588 "sudo cat /home/docker/cp-test_ha-641588-m03_ha-641588.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m03:/home/docker/cp-test.txt ha-641588-m02:/home/docker/cp-test_ha-641588-m03_ha-641588-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m02 "sudo cat /home/docker/cp-test_ha-641588-m03_ha-641588-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m03:/home/docker/cp-test.txt ha-641588-m04:/home/docker/cp-test_ha-641588-m03_ha-641588-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m04 "sudo cat /home/docker/cp-test_ha-641588-m03_ha-641588-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp testdata/cp-test.txt ha-641588-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3284100187/001/cp-test_ha-641588-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m04:/home/docker/cp-test.txt ha-641588:/home/docker/cp-test_ha-641588-m04_ha-641588.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588 "sudo cat /home/docker/cp-test_ha-641588-m04_ha-641588.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m04:/home/docker/cp-test.txt ha-641588-m02:/home/docker/cp-test_ha-641588-m04_ha-641588-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m02 "sudo cat /home/docker/cp-test_ha-641588-m04_ha-641588-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 cp ha-641588-m04:/home/docker/cp-test.txt ha-641588-m03:/home/docker/cp-test_ha-641588-m04_ha-641588-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 ssh -n ha-641588-m03 "sudo cat /home/docker/cp-test_ha-641588-m04_ha-641588-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.79s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (82.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 node stop m02 --alsologtostderr -v 5
E1206 09:57:15.536782  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:15.543231  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:15.554593  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:15.576011  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:15.617417  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:15.698813  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:15.860314  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:16.182060  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:16.824106  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:18.105683  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:20.668044  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:25.790360  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:36.032441  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:48.983620  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:57:56.514649  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:58:00.695341  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 node stop m02 --alsologtostderr -v 5: (1m22.304374216s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5: exit status 7 (504.801337ms)

-- stdout --
	ha-641588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-641588-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-641588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-641588-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1206 09:58:04.686946  408919 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:58:04.687187  408919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:58:04.687196  408919 out.go:374] Setting ErrFile to fd 2...
	I1206 09:58:04.687200  408919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:58:04.687432  408919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 09:58:04.687645  408919 out.go:368] Setting JSON to false
	I1206 09:58:04.687677  408919 mustload.go:66] Loading cluster: ha-641588
	I1206 09:58:04.687806  408919 notify.go:221] Checking for updates...
	I1206 09:58:04.688250  408919 config.go:182] Loaded profile config "ha-641588": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 09:58:04.688276  408919 status.go:174] checking status of ha-641588 ...
	I1206 09:58:04.690627  408919 status.go:371] ha-641588 host status = "Running" (err=<nil>)
	I1206 09:58:04.690647  408919 host.go:66] Checking if "ha-641588" exists ...
	I1206 09:58:04.693356  408919 main.go:143] libmachine: domain ha-641588 has defined MAC address 52:54:00:f3:cc:88 in network mk-ha-641588
	I1206 09:58:04.693819  408919 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:cc:88", ip: ""} in network mk-ha-641588: {Iface:virbr1 ExpiryTime:2025-12-06 10:52:47 +0000 UTC Type:0 Mac:52:54:00:f3:cc:88 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-641588 Clientid:01:52:54:00:f3:cc:88}
	I1206 09:58:04.693844  408919 main.go:143] libmachine: domain ha-641588 has defined IP address 192.168.39.71 and MAC address 52:54:00:f3:cc:88 in network mk-ha-641588
	I1206 09:58:04.694029  408919 host.go:66] Checking if "ha-641588" exists ...
	I1206 09:58:04.694309  408919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:58:04.696477  408919 main.go:143] libmachine: domain ha-641588 has defined MAC address 52:54:00:f3:cc:88 in network mk-ha-641588
	I1206 09:58:04.696826  408919 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:cc:88", ip: ""} in network mk-ha-641588: {Iface:virbr1 ExpiryTime:2025-12-06 10:52:47 +0000 UTC Type:0 Mac:52:54:00:f3:cc:88 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-641588 Clientid:01:52:54:00:f3:cc:88}
	I1206 09:58:04.696881  408919 main.go:143] libmachine: domain ha-641588 has defined IP address 192.168.39.71 and MAC address 52:54:00:f3:cc:88 in network mk-ha-641588
	I1206 09:58:04.697028  408919 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/ha-641588/id_rsa Username:docker}
	I1206 09:58:04.792028  408919 ssh_runner.go:195] Run: systemctl --version
	I1206 09:58:04.798935  408919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:58:04.816493  408919 kubeconfig.go:125] found "ha-641588" server: "https://192.168.39.254:8443"
	I1206 09:58:04.816524  408919 api_server.go:166] Checking apiserver status ...
	I1206 09:58:04.816554  408919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:58:04.843779  408919 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1481/cgroup
	W1206 09:58:04.855907  408919 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1481/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:58:04.855965  408919 ssh_runner.go:195] Run: ls
	I1206 09:58:04.861099  408919 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1206 09:58:04.865594  408919 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1206 09:58:04.865616  408919 status.go:463] ha-641588 apiserver status = Running (err=<nil>)
	I1206 09:58:04.865629  408919 status.go:176] ha-641588 status: &{Name:ha-641588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:58:04.865652  408919 status.go:174] checking status of ha-641588-m02 ...
	I1206 09:58:04.867429  408919 status.go:371] ha-641588-m02 host status = "Stopped" (err=<nil>)
	I1206 09:58:04.867446  408919 status.go:384] host is not running, skipping remaining checks
	I1206 09:58:04.867453  408919 status.go:176] ha-641588-m02 status: &{Name:ha-641588-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:58:04.867470  408919 status.go:174] checking status of ha-641588-m03 ...
	I1206 09:58:04.868724  408919 status.go:371] ha-641588-m03 host status = "Running" (err=<nil>)
	I1206 09:58:04.868740  408919 host.go:66] Checking if "ha-641588-m03" exists ...
	I1206 09:58:04.871410  408919 main.go:143] libmachine: domain ha-641588-m03 has defined MAC address 52:54:00:15:08:6e in network mk-ha-641588
	I1206 09:58:04.871815  408919 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:08:6e", ip: ""} in network mk-ha-641588: {Iface:virbr1 ExpiryTime:2025-12-06 10:54:42 +0000 UTC Type:0 Mac:52:54:00:15:08:6e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-641588-m03 Clientid:01:52:54:00:15:08:6e}
	I1206 09:58:04.871883  408919 main.go:143] libmachine: domain ha-641588-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:15:08:6e in network mk-ha-641588
	I1206 09:58:04.872044  408919 host.go:66] Checking if "ha-641588-m03" exists ...
	I1206 09:58:04.872233  408919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:58:04.874126  408919 main.go:143] libmachine: domain ha-641588-m03 has defined MAC address 52:54:00:15:08:6e in network mk-ha-641588
	I1206 09:58:04.874468  408919 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:15:08:6e", ip: ""} in network mk-ha-641588: {Iface:virbr1 ExpiryTime:2025-12-06 10:54:42 +0000 UTC Type:0 Mac:52:54:00:15:08:6e Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-641588-m03 Clientid:01:52:54:00:15:08:6e}
	I1206 09:58:04.874489  408919 main.go:143] libmachine: domain ha-641588-m03 has defined IP address 192.168.39.59 and MAC address 52:54:00:15:08:6e in network mk-ha-641588
	I1206 09:58:04.874615  408919 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/ha-641588-m03/id_rsa Username:docker}
	I1206 09:58:04.955635  408919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:58:04.974478  408919 kubeconfig.go:125] found "ha-641588" server: "https://192.168.39.254:8443"
	I1206 09:58:04.974503  408919 api_server.go:166] Checking apiserver status ...
	I1206 09:58:04.974543  408919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:58:04.993404  408919 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1517/cgroup
	W1206 09:58:05.004004  408919 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1517/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:58:05.004060  408919 ssh_runner.go:195] Run: ls
	I1206 09:58:05.008998  408919 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1206 09:58:05.013592  408919 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1206 09:58:05.013618  408919 status.go:463] ha-641588-m03 apiserver status = Running (err=<nil>)
	I1206 09:58:05.013630  408919 status.go:176] ha-641588-m03 status: &{Name:ha-641588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:58:05.013656  408919 status.go:174] checking status of ha-641588-m04 ...
	I1206 09:58:05.015313  408919 status.go:371] ha-641588-m04 host status = "Running" (err=<nil>)
	I1206 09:58:05.015346  408919 host.go:66] Checking if "ha-641588-m04" exists ...
	I1206 09:58:05.018046  408919 main.go:143] libmachine: domain ha-641588-m04 has defined MAC address 52:54:00:43:13:30 in network mk-ha-641588
	I1206 09:58:05.018506  408919 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:43:13:30", ip: ""} in network mk-ha-641588: {Iface:virbr1 ExpiryTime:2025-12-06 10:56:04 +0000 UTC Type:0 Mac:52:54:00:43:13:30 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-641588-m04 Clientid:01:52:54:00:43:13:30}
	I1206 09:58:05.018531  408919 main.go:143] libmachine: domain ha-641588-m04 has defined IP address 192.168.39.236 and MAC address 52:54:00:43:13:30 in network mk-ha-641588
	I1206 09:58:05.018653  408919 host.go:66] Checking if "ha-641588-m04" exists ...
	I1206 09:58:05.018848  408919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:58:05.020825  408919 main.go:143] libmachine: domain ha-641588-m04 has defined MAC address 52:54:00:43:13:30 in network mk-ha-641588
	I1206 09:58:05.021270  408919 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:43:13:30", ip: ""} in network mk-ha-641588: {Iface:virbr1 ExpiryTime:2025-12-06 10:56:04 +0000 UTC Type:0 Mac:52:54:00:43:13:30 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-641588-m04 Clientid:01:52:54:00:43:13:30}
	I1206 09:58:05.021296  408919 main.go:143] libmachine: domain ha-641588-m04 has defined IP address 192.168.39.236 and MAC address 52:54:00:43:13:30 in network mk-ha-641588
	I1206 09:58:05.021416  408919 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/ha-641588-m04/id_rsa Username:docker}
	I1206 09:58:05.107934  408919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:58:05.125226  408919 status.go:176] ha-641588-m04 status: &{Name:ha-641588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.81s)
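Note on the status check above: with ha-641588-m02 stopped, the status command still prints the per-node report but exits with status 7, and that non-zero exit is what the run records at ha_test.go:371. A minimal standalone sketch in Go of the same probe, assuming the out/minikube-linux-amd64 binary and the ha-641588 profile from this run are reachable from the working directory (both are assumptions of the sketch, not part of the harness):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same status invocation as ha_test.go:371; with one control-plane node
	// stopped it prints the report and exits non-zero (7 in the run above).
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-641588", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("status exited with code %d: cluster is degraded\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}

In this run it is the exit code, not the report text, that flags the degraded state while m02 is down.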

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

TestMultiControlPlane/serial/RestartSecondaryNode (27.64s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 node start m02 --alsologtostderr -v 5: (26.672901756s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (27.64s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.78s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.78s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (387.97s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 stop --alsologtostderr -v 5
E1206 09:58:37.476203  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:23.760956  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:59.398394  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:02:15.540006  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:02:43.242039  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:02:48.985299  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 stop --alsologtostderr -v 5: (4m18.275489468s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 start --wait true --alsologtostderr -v 5
E1206 10:03:00.695328  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 start --wait true --alsologtostderr -v 5: (2m9.56281238s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (387.97s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.56s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 node delete m03 --alsologtostderr -v 5: (5.911094126s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.56s)
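The Ready check at ha_test.go:521 drives kubectl with a go-template that walks every node and prints the status of its Ready condition, one per line. A small sketch of issuing the same template and tallying the output, assuming kubectl is already pointed at this cluster's context (the tally logic is illustrative, not what the test itself does):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// go-template copied from the test invocation above; the surrounding
	// single quotes are part of the argument exactly as the test passes it.
	tmpl := `go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
	if err != nil {
		fmt.Println("kubectl get nodes failed:", err)
		return
	}
	// Each node contributes a single "True" or "False" status line.
	ready := strings.Count(string(out), "True")
	notReady := strings.Count(string(out), "False")
	fmt.Printf("Ready=True on %d nodes, Ready=False on %d\n", ready, notReady)
}

With m03 deleted in the step above, a healthy result here would be three True lines from the three remaining nodes.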

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

TestMultiControlPlane/serial/StopCluster (255.46s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 stop --alsologtostderr -v 5
E1206 10:05:52.054147  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:15.540029  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:48.982703  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:08:00.694987  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 stop --alsologtostderr -v 5: (4m15.400529144s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5: exit status 7 (63.045554ms)

-- stdout --
	ha-641588
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-641588-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-641588-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 10:09:24.561485  411943 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:09:24.562023  411943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:09:24.562040  411943 out.go:374] Setting ErrFile to fd 2...
	I1206 10:09:24.562048  411943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:09:24.562463  411943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 10:09:24.562840  411943 out.go:368] Setting JSON to false
	I1206 10:09:24.562884  411943 mustload.go:66] Loading cluster: ha-641588
	I1206 10:09:24.562989  411943 notify.go:221] Checking for updates...
	I1206 10:09:24.563337  411943 config.go:182] Loaded profile config "ha-641588": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:09:24.563355  411943 status.go:174] checking status of ha-641588 ...
	I1206 10:09:24.565357  411943 status.go:371] ha-641588 host status = "Stopped" (err=<nil>)
	I1206 10:09:24.565371  411943 status.go:384] host is not running, skipping remaining checks
	I1206 10:09:24.565377  411943 status.go:176] ha-641588 status: &{Name:ha-641588 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:09:24.565402  411943 status.go:174] checking status of ha-641588-m02 ...
	I1206 10:09:24.566486  411943 status.go:371] ha-641588-m02 host status = "Stopped" (err=<nil>)
	I1206 10:09:24.566498  411943 status.go:384] host is not running, skipping remaining checks
	I1206 10:09:24.566502  411943 status.go:176] ha-641588-m02 status: &{Name:ha-641588-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:09:24.566513  411943 status.go:174] checking status of ha-641588-m04 ...
	I1206 10:09:24.567549  411943 status.go:371] ha-641588-m04 host status = "Stopped" (err=<nil>)
	I1206 10:09:24.567565  411943 status.go:384] host is not running, skipping remaining checks
	I1206 10:09:24.567569  411943 status.go:176] ha-641588-m04 status: &{Name:ha-641588-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (255.46s)

TestMultiControlPlane/serial/RestartCluster (115.85s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (1m55.209864321s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (115.85s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

TestMultiControlPlane/serial/AddSecondaryNode (71.56s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 node add --control-plane --alsologtostderr -v 5
E1206 10:12:15.540179  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-641588 node add --control-plane --alsologtostderr -v 5: (1m10.884421681s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-641588 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

TestJSONOutput/start/Command (78.72s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-791649 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd
E1206 10:12:48.985244  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:13:00.695386  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:13:38.606173  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-791649 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd: (1m18.721529918s)
--- PASS: TestJSONOutput/start/Command (78.72s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-791649 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-791649 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.7s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-791649 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-791649 --output=json --user=testUser: (6.69772136s)
--- PASS: TestJSONOutput/stop/Command (6.70s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-175520 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-175520 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.868987ms)

-- stdout --
	{"specversion":"1.0","id":"2a40c9d6-a8ee-46f3-87f3-f8bf761b2ea4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-175520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"35cf31a4-e938-44f3-83e7-95af42a15e59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"3c3fbbf6-f890-4d26-8a09-09cc4ad289b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4c67bec7-c41c-44ec-b62c-9b713c8c6bed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig"}}
	{"specversion":"1.0","id":"7b790fe9-519e-4b9e-a472-82ce327db8c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube"}}
	{"specversion":"1.0","id":"dffaacd6-d3c4-451b-be77-96da066624e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9d00feed-920e-4039-b761-ece37d1dcea8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"abb503f6-bc88-4807-9c48-2875f79c870b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-175520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-175520
--- PASS: TestErrorJSONOutput (0.23s)
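Every line emitted under --output=json above is a CloudEvents-style envelope (specversion, id, source, type, data). A minimal sketch of decoding one such line in Go, using the error event from this run verbatim; the struct names only the fields the sketch reads and is illustrative, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors only the fields this sketch reads from the stream above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Error event copied verbatim from the TestErrorJSONOutput run above.
	line := `{"specversion":"1.0","id":"abb503f6-bc88-4807-9c48-2875f79c870b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}

The step and info events in the same stream decode the same way; only the keys inside data differ (currentstep, name, totalsteps, message, and so on).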

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (81.89s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-756755 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-756755 --driver=kvm2  --container-runtime=containerd: (38.488830612s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-759463 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-759463 --driver=kvm2  --container-runtime=containerd: (40.868099553s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-756755
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-759463
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-759463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-759463
helpers_test.go:175: Cleaning up "first-756755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-756755
--- PASS: TestMinikubeProfile (81.89s)

TestMountStart/serial/StartWithMountFirst (22.12s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-930691 --memory=3072 --mount-string /tmp/TestMountStartserial1592602356/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-930691 --memory=3072 --mount-string /tmp/TestMountStartserial1592602356/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (21.119664972s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.12s)

TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-930691 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-930691 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (24.13s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-947089 --memory=3072 --mount-string /tmp/TestMountStartserial1592602356/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1206 10:16:03.764051  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-947089 --memory=3072 --mount-string /tmp/TestMountStartserial1592602356/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (23.130011124s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.13s)

TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-947089 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-947089 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (0.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-930691 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-947089 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-947089 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.4s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-947089
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-947089: (1.395970334s)
--- PASS: TestMountStart/serial/Stop (1.40s)

TestMountStart/serial/RestartStopped (18.3s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-947089
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-947089: (17.295136051s)
--- PASS: TestMountStart/serial/RestartStopped (18.30s)

TestMountStart/serial/VerifyMountPostStop (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-947089 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-947089 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (98.5s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463758 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1206 10:17:15.536500  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:17:48.982369  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:00.694905  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-463758 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m38.165659844s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.50s)

TestMultiNode/serial/DeployApp2Nodes (3.78s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-463758 -- rollout status deployment/busybox: (2.146319446s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-hvjqn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-lkkwj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-hvjqn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-lkkwj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-hvjqn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-lkkwj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.78s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-hvjqn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-hvjqn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-lkkwj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463758 -- exec busybox-7b57f96db7-lkkwj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
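The ping step above does two things inside each busybox pod: it extracts the address that host.minikube.internal resolves to (line 5, field 3 of the nslookup output, which is where the test's pipeline expects it) and then sends a single ping to that address (192.168.39.1 in this run). A compact sketch of the same two steps driven through kubectl exec; the pod name is copied from this run and is purely illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-hvjqn" // pod name taken from the run above, illustrative only

	// Same pipeline the test runs in the pod: nslookup, keep line 5, take field 3.
	lookup := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	ip := strings.TrimSpace(string(out))

	// One ping from the pod back to the host, as in multinode_test.go:583.
	if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+ip).Run(); err != nil {
		fmt.Println("ping failed:", err)
		return
	}
	fmt.Println("host", ip, "is reachable from", pod)
}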

TestMultiNode/serial/AddNode (42.39s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-463758 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-463758 -v=5 --alsologtostderr: (41.951307117s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.39s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-463758 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.45s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

TestMultiNode/serial/CopyFile (5.95s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp testdata/cp-test.txt multinode-463758:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp multinode-463758:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile918548722/001/cp-test_multinode-463758.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp multinode-463758:/home/docker/cp-test.txt multinode-463758-m02:/home/docker/cp-test_multinode-463758_multinode-463758-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m02 "sudo cat /home/docker/cp-test_multinode-463758_multinode-463758-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp multinode-463758:/home/docker/cp-test.txt multinode-463758-m03:/home/docker/cp-test_multinode-463758_multinode-463758-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m03 "sudo cat /home/docker/cp-test_multinode-463758_multinode-463758-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp testdata/cp-test.txt multinode-463758-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp multinode-463758-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile918548722/001/cp-test_multinode-463758-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp multinode-463758-m02:/home/docker/cp-test.txt multinode-463758:/home/docker/cp-test_multinode-463758-m02_multinode-463758.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758 "sudo cat /home/docker/cp-test_multinode-463758-m02_multinode-463758.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp multinode-463758-m02:/home/docker/cp-test.txt multinode-463758-m03:/home/docker/cp-test_multinode-463758-m02_multinode-463758-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m03 "sudo cat /home/docker/cp-test_multinode-463758-m02_multinode-463758-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp testdata/cp-test.txt multinode-463758-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp multinode-463758-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile918548722/001/cp-test_multinode-463758-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp multinode-463758-m03:/home/docker/cp-test.txt multinode-463758:/home/docker/cp-test_multinode-463758-m03_multinode-463758.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758 "sudo cat /home/docker/cp-test_multinode-463758-m03_multinode-463758.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 cp multinode-463758-m03:/home/docker/cp-test.txt multinode-463758-m02:/home/docker/cp-test_multinode-463758-m03_multinode-463758-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 ssh -n multinode-463758-m02 "sudo cat /home/docker/cp-test_multinode-463758-m03_multinode-463758-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.95s)
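
For reference, the node-to-node copy pattern exercised above can be reproduced by hand. The commands below are taken from this run (minikube stands for the binary under test, out/minikube-linux-amd64); profile, node, and file names mirror the log:

    # copy a local file onto the control-plane node, then read it back over ssh
    minikube -p multinode-463758 cp testdata/cp-test.txt multinode-463758:/home/docker/cp-test.txt
    minikube -p multinode-463758 ssh -n multinode-463758 "sudo cat /home/docker/cp-test.txt"
    # copy node-to-node (control plane -> worker m02) and verify on the destination node
    minikube -p multinode-463758 cp multinode-463758:/home/docker/cp-test.txt multinode-463758-m02:/home/docker/cp-test_multinode-463758_multinode-463758-m02.txt
    minikube -p multinode-463758 ssh -n multinode-463758-m02 "sudo cat /home/docker/cp-test_multinode-463758_multinode-463758-m02.txt"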

                                                
                                    
TestMultiNode/serial/StopNode (2.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-463758 node stop m03: (1.391918813s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-463758 status: exit status 7 (319.734788ms)

                                                
                                                
-- stdout --
	multinode-463758
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-463758-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-463758-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-463758 status --alsologtostderr: exit status 7 (326.829864ms)

                                                
                                                
-- stdout --
	multinode-463758
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-463758-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-463758-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 10:19:07.783302  417882 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:19:07.783515  417882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:19:07.783524  417882 out.go:374] Setting ErrFile to fd 2...
	I1206 10:19:07.783529  417882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:19:07.783734  417882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 10:19:07.783919  417882 out.go:368] Setting JSON to false
	I1206 10:19:07.783944  417882 mustload.go:66] Loading cluster: multinode-463758
	I1206 10:19:07.784038  417882 notify.go:221] Checking for updates...
	I1206 10:19:07.784772  417882 config.go:182] Loaded profile config "multinode-463758": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:19:07.784808  417882 status.go:174] checking status of multinode-463758 ...
	I1206 10:19:07.787554  417882 status.go:371] multinode-463758 host status = "Running" (err=<nil>)
	I1206 10:19:07.787579  417882 host.go:66] Checking if "multinode-463758" exists ...
	I1206 10:19:07.790260  417882 main.go:143] libmachine: domain multinode-463758 has defined MAC address 52:54:00:21:00:8b in network mk-multinode-463758
	I1206 10:19:07.790699  417882 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:00:8b", ip: ""} in network mk-multinode-463758: {Iface:virbr1 ExpiryTime:2025-12-06 11:16:49 +0000 UTC Type:0 Mac:52:54:00:21:00:8b Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-463758 Clientid:01:52:54:00:21:00:8b}
	I1206 10:19:07.790728  417882 main.go:143] libmachine: domain multinode-463758 has defined IP address 192.168.39.212 and MAC address 52:54:00:21:00:8b in network mk-multinode-463758
	I1206 10:19:07.790903  417882 host.go:66] Checking if "multinode-463758" exists ...
	I1206 10:19:07.791139  417882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:19:07.793399  417882 main.go:143] libmachine: domain multinode-463758 has defined MAC address 52:54:00:21:00:8b in network mk-multinode-463758
	I1206 10:19:07.793747  417882 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:00:8b", ip: ""} in network mk-multinode-463758: {Iface:virbr1 ExpiryTime:2025-12-06 11:16:49 +0000 UTC Type:0 Mac:52:54:00:21:00:8b Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-463758 Clientid:01:52:54:00:21:00:8b}
	I1206 10:19:07.793767  417882 main.go:143] libmachine: domain multinode-463758 has defined IP address 192.168.39.212 and MAC address 52:54:00:21:00:8b in network mk-multinode-463758
	I1206 10:19:07.793897  417882 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/multinode-463758/id_rsa Username:docker}
	I1206 10:19:07.875013  417882 ssh_runner.go:195] Run: systemctl --version
	I1206 10:19:07.880963  417882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:19:07.899347  417882 kubeconfig.go:125] found "multinode-463758" server: "https://192.168.39.212:8443"
	I1206 10:19:07.899375  417882 api_server.go:166] Checking apiserver status ...
	I1206 10:19:07.899406  417882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:19:07.917537  417882 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup
	W1206 10:19:07.928123  417882 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:19:07.928174  417882 ssh_runner.go:195] Run: ls
	I1206 10:19:07.933128  417882 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1206 10:19:07.938847  417882 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1206 10:19:07.938878  417882 status.go:463] multinode-463758 apiserver status = Running (err=<nil>)
	I1206 10:19:07.938890  417882 status.go:176] multinode-463758 status: &{Name:multinode-463758 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:19:07.938915  417882 status.go:174] checking status of multinode-463758-m02 ...
	I1206 10:19:07.940415  417882 status.go:371] multinode-463758-m02 host status = "Running" (err=<nil>)
	I1206 10:19:07.940432  417882 host.go:66] Checking if "multinode-463758-m02" exists ...
	I1206 10:19:07.942767  417882 main.go:143] libmachine: domain multinode-463758-m02 has defined MAC address 52:54:00:68:c7:3f in network mk-multinode-463758
	I1206 10:19:07.943155  417882 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:68:c7:3f", ip: ""} in network mk-multinode-463758: {Iface:virbr1 ExpiryTime:2025-12-06 11:17:44 +0000 UTC Type:0 Mac:52:54:00:68:c7:3f Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-463758-m02 Clientid:01:52:54:00:68:c7:3f}
	I1206 10:19:07.943183  417882 main.go:143] libmachine: domain multinode-463758-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:68:c7:3f in network mk-multinode-463758
	I1206 10:19:07.943314  417882 host.go:66] Checking if "multinode-463758-m02" exists ...
	I1206 10:19:07.943548  417882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:19:07.945494  417882 main.go:143] libmachine: domain multinode-463758-m02 has defined MAC address 52:54:00:68:c7:3f in network mk-multinode-463758
	I1206 10:19:07.945981  417882 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:68:c7:3f", ip: ""} in network mk-multinode-463758: {Iface:virbr1 ExpiryTime:2025-12-06 11:17:44 +0000 UTC Type:0 Mac:52:54:00:68:c7:3f Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-463758-m02 Clientid:01:52:54:00:68:c7:3f}
	I1206 10:19:07.946011  417882 main.go:143] libmachine: domain multinode-463758-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:68:c7:3f in network mk-multinode-463758
	I1206 10:19:07.946135  417882 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-383742/.minikube/machines/multinode-463758-m02/id_rsa Username:docker}
	I1206 10:19:08.031503  417882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:19:08.048621  417882 status.go:176] multinode-463758-m02 status: &{Name:multinode-463758-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:19:08.048651  417882 status.go:174] checking status of multinode-463758-m03 ...
	I1206 10:19:08.050346  417882 status.go:371] multinode-463758-m03 host status = "Stopped" (err=<nil>)
	I1206 10:19:08.050362  417882 status.go:384] host is not running, skipping remaining checks
	I1206 10:19:08.050367  417882 status.go:176] multinode-463758-m03 status: &{Name:multinode-463758-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (36.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-463758 node start m03 -v=5 --alsologtostderr: (35.964918051s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.45s)
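
The single-node stop/start sequence above boils down to the following commands, copied from this run; as the StopNode output shows, status exits with code 7 while any node is stopped:

    # stop one worker node, observe degraded status, then bring it back and re-check
    minikube -p multinode-463758 node stop m03
    minikube -p multinode-463758 status                        # exit status 7 while m03 is stopped
    minikube -p multinode-463758 node start m03 -v=5 --alsologtostderr
    minikube -p multinode-463758 status -v=5 --alsologtostderr
    kubectl get nodes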

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (295.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-463758
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-463758
E1206 10:22:15.539884  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-463758: (2m35.113812275s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463758 --wait=true -v=5 --alsologtostderr
E1206 10:22:32.056619  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:22:48.983621  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:23:00.695501  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-463758 --wait=true -v=5 --alsologtostderr: (2m20.741969232s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-463758
--- PASS: TestMultiNode/serial/RestartKeepsNodes (295.98s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-463758 node delete m03: (1.551128859s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.00s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (170.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 stop
E1206 10:27:15.540354  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-463758 stop: (2m50.18197437s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-463758 status: exit status 7 (62.787053ms)

                                                
                                                
-- stdout --
	multinode-463758
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-463758-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-463758 status --alsologtostderr: exit status 7 (63.51511ms)

                                                
                                                
-- stdout --
	multinode-463758
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-463758-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 10:27:32.788255  420166 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:27:32.788702  420166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:27:32.788711  420166 out.go:374] Setting ErrFile to fd 2...
	I1206 10:27:32.788715  420166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:27:32.788941  420166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 10:27:32.789109  420166 out.go:368] Setting JSON to false
	I1206 10:27:32.789134  420166 mustload.go:66] Loading cluster: multinode-463758
	I1206 10:27:32.789253  420166 notify.go:221] Checking for updates...
	I1206 10:27:32.789517  420166 config.go:182] Loaded profile config "multinode-463758": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:27:32.789533  420166 status.go:174] checking status of multinode-463758 ...
	I1206 10:27:32.791853  420166 status.go:371] multinode-463758 host status = "Stopped" (err=<nil>)
	I1206 10:27:32.791886  420166 status.go:384] host is not running, skipping remaining checks
	I1206 10:27:32.791893  420166 status.go:176] multinode-463758 status: &{Name:multinode-463758 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:27:32.791911  420166 status.go:174] checking status of multinode-463758-m02 ...
	I1206 10:27:32.793085  420166 status.go:371] multinode-463758-m02 host status = "Stopped" (err=<nil>)
	I1206 10:27:32.793099  420166 status.go:384] host is not running, skipping remaining checks
	I1206 10:27:32.793104  420166 status.go:176] multinode-463758-m02 status: &{Name:multinode-463758-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (170.31s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (76.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463758 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1206 10:27:48.982711  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:28:00.694686  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-463758 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m16.302773574s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463758 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.76s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-463758
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463758-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-463758-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (73.114518ms)

                                                
                                                
-- stdout --
	* [multinode-463758-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-463758-m02' is duplicated with machine name 'multinode-463758-m02' in profile 'multinode-463758'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463758-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-463758-m03 --driver=kvm2  --container-runtime=containerd: (38.655104583s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-463758
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-463758: exit status 80 (195.352672ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-463758 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-463758-m03 already exists in multinode-463758-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-463758-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.76s)
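
The two conflict checks above amount to: a new profile may not reuse an existing machine name, and node add refuses a node name already owned by another profile (both commands copied from this run):

    minikube start -p multinode-463758-m02 --driver=kvm2 --container-runtime=containerd   # exit status 14: profile name must be unique
    minikube node add -p multinode-463758                                                 # exit status 80: m03 already exists as profile multinode-463758-m03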

                                                
                                    
TestPreload (140.2s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-394036 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd
E1206 10:30:18.608282  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-394036 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd: (1m29.62746766s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-394036 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-394036
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-394036: (6.654272615s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-394036 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-394036 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (41.976874375s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-394036 image list
helpers_test.go:175: Cleaning up "test-preload-394036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-394036
--- PASS: TestPreload (140.20s)
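
The preload check above follows this shape (commands copied from the run; the expectation is that the image pulled before the restart still shows up in the final image list):

    # create a cluster without the preloaded image tarball, pull an extra image, then restart with preload enabled
    minikube start -p test-preload-394036 --memory=3072 --wait=true --preload=false --driver=kvm2 --container-runtime=containerd
    minikube -p test-preload-394036 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-394036
    minikube start -p test-preload-394036 --preload=true --wait=true --driver=kvm2 --container-runtime=containerd
    minikube -p test-preload-394036 image list    # the busybox image pulled above should still be listed
    minikube delete -p test-preload-394036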

                                                
                                    
TestScheduledStopUnix (107.82s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-299652 --memory=3072 --driver=kvm2  --container-runtime=containerd
E1206 10:32:15.539958  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-299652 --memory=3072 --driver=kvm2  --container-runtime=containerd: (36.237576521s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-299652 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 10:32:27.251538  422322 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:32:27.251662  422322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:32:27.251672  422322 out.go:374] Setting ErrFile to fd 2...
	I1206 10:32:27.251678  422322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:32:27.251934  422322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 10:32:27.252202  422322 out.go:368] Setting JSON to false
	I1206 10:32:27.252298  422322 mustload.go:66] Loading cluster: scheduled-stop-299652
	I1206 10:32:27.252632  422322 config.go:182] Loaded profile config "scheduled-stop-299652": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:32:27.252718  422322 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/config.json ...
	I1206 10:32:27.252933  422322 mustload.go:66] Loading cluster: scheduled-stop-299652
	I1206 10:32:27.253091  422322 config.go:182] Loaded profile config "scheduled-stop-299652": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-299652 -n scheduled-stop-299652
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-299652 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 10:32:27.533724  422368 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:32:27.533983  422368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:32:27.533993  422368 out.go:374] Setting ErrFile to fd 2...
	I1206 10:32:27.533999  422368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:32:27.534222  422368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 10:32:27.534463  422368 out.go:368] Setting JSON to false
	I1206 10:32:27.534672  422368 daemonize_unix.go:73] killing process 422357 as it is an old scheduled stop
	I1206 10:32:27.534777  422368 mustload.go:66] Loading cluster: scheduled-stop-299652
	I1206 10:32:27.535165  422368 config.go:182] Loaded profile config "scheduled-stop-299652": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:32:27.535265  422368 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/config.json ...
	I1206 10:32:27.535446  422368 mustload.go:66] Loading cluster: scheduled-stop-299652
	I1206 10:32:27.535574  422368 config.go:182] Loaded profile config "scheduled-stop-299652": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1206 10:32:27.541259  387687 retry.go:31] will retry after 122.484µs: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.542410  387687 retry.go:31] will retry after 194.178µs: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.543567  387687 retry.go:31] will retry after 303.036µs: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.544729  387687 retry.go:31] will retry after 409.637µs: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.545832  387687 retry.go:31] will retry after 549.321µs: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.546982  387687 retry.go:31] will retry after 491.179µs: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.548102  387687 retry.go:31] will retry after 1.165529ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.550336  387687 retry.go:31] will retry after 2.533487ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.553546  387687 retry.go:31] will retry after 3.749763ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.557789  387687 retry.go:31] will retry after 2.620994ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.561014  387687 retry.go:31] will retry after 3.303198ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.565225  387687 retry.go:31] will retry after 6.743694ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.572453  387687 retry.go:31] will retry after 13.297595ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.586706  387687 retry.go:31] will retry after 18.350803ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.605955  387687 retry.go:31] will retry after 38.053094ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
I1206 10:32:27.644162  387687 retry.go:31] will retry after 33.618205ms: open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-299652 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1206 10:32:43.767645  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:32:48.985531  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-299652 -n scheduled-stop-299652
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-299652
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-299652 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 10:32:53.239509  422526 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:32:53.239789  422526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:32:53.239809  422526 out.go:374] Setting ErrFile to fd 2...
	I1206 10:32:53.239813  422526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:32:53.240018  422526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 10:32:53.240316  422526 out.go:368] Setting JSON to false
	I1206 10:32:53.240392  422526 mustload.go:66] Loading cluster: scheduled-stop-299652
	I1206 10:32:53.240699  422526 config.go:182] Loaded profile config "scheduled-stop-299652": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1206 10:32:53.240789  422526 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/scheduled-stop-299652/config.json ...
	I1206 10:32:53.240990  422526 mustload.go:66] Loading cluster: scheduled-stop-299652
	I1206 10:32:53.241090  422526 config.go:182] Loaded profile config "scheduled-stop-299652": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1206 10:33:00.695851  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-299652
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-299652: exit status 7 (60.697463ms)

                                                
                                                
-- stdout --
	scheduled-stop-299652
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-299652 -n scheduled-stop-299652
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-299652 -n scheduled-stop-299652: exit status 7 (57.99281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-299652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-299652
--- PASS: TestScheduledStopUnix (107.82s)
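
The scheduled-stop flow above uses these commands (copied from the run; a newer --schedule supersedes any pending one, as the "killing process ... old scheduled stop" line shows):

    minikube stop -p scheduled-stop-299652 --schedule 5m      # schedule a stop 5 minutes out
    minikube stop -p scheduled-stop-299652 --schedule 15s     # replaces the earlier schedule
    minikube stop -p scheduled-stop-299652 --cancel-scheduled
    minikube status --format={{.TimeToStop}} -p scheduled-stop-299652 -n scheduled-stop-299652
    minikube status -p scheduled-stop-299652                  # exit status 7 once the scheduled stop has fired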

                                                
                                    
TestRunningBinaryUpgrade (147.25s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1054228315 start -p running-upgrade-777099 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1054228315 start -p running-upgrade-777099 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd: (1m39.221574276s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-777099 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-777099 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (46.474898613s)
helpers_test.go:175: Cleaning up "running-upgrade-777099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-777099
--- PASS: TestRunningBinaryUpgrade (147.25s)
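
The running-binary upgrade above is an in-place restart of the same profile with a newer minikube binary; the /tmp/minikube-v1.35.0.* path is the previous release fetched by the test, and out/minikube-linux-amd64 is the binary under test:

    # create the cluster with the old release, then upgrade it by re-running start with the new binary
    /tmp/minikube-v1.35.0.1054228315 start -p running-upgrade-777099 --memory=3072 --vm-driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p running-upgrade-777099 --memory=3072 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 delete -p running-upgrade-777099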

                                                
                                    
TestKubernetesUpgrade (178.23s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m0.297761353s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-926065
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-926065: (1.612777455s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-926065 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-926065 status --format={{.Host}}: exit status 7 (67.242274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (59.674244831s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-926065 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (86.101884ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-926065] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-926065
	    minikube start -p kubernetes-upgrade-926065 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9260652 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-926065 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (55.264687902s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-926065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-926065
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-926065: (1.161736814s)
--- PASS: TestKubernetesUpgrade (178.23s)
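
In command form, the Kubernetes upgrade path above (all commands copied from this run) is: stop, then restart with a newer --kubernetes-version; an in-place downgrade of the same profile is refused:

    minikube start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=containerd
    minikube stop -p kubernetes-upgrade-926065
    minikube start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=kvm2 --container-runtime=containerd
    # downgrading the same profile fails with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED); delete and recreate instead
    minikube start -p kubernetes-upgrade-926065 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=containerd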

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674937 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-674937 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd: exit status 14 (96.278479ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-674937] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (81.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674937 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674937 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m21.184337085s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-674937 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (81.49s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (28.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674937 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674937 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (26.958491024s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-674937 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-674937 status -o json: exit status 2 (240.463039ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-674937","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-674937
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.12s)
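
The --no-kubernetes behaviour exercised above, in command form (copied from this run):

    # --no-kubernetes cannot be combined with an explicit --kubernetes-version (exit status 14)
    minikube start -p NoKubernetes-674937 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=containerd
    # restarting an existing profile with --no-kubernetes keeps the host but stops the Kubernetes components
    minikube start -p NoKubernetes-674937 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=containerd
    minikube -p NoKubernetes-674937 status -o json    # Host "Running", Kubelet/APIServer "Stopped" -> exit status 2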

                                                
                                    
TestNetworkPlugins/group/false (4.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-134334 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-134334 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (124.405852ms)

                                                
                                                
-- stdout --
	* [false-134334] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 10:35:07.017007  424612 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:35:07.017340  424612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:07.017357  424612 out.go:374] Setting ErrFile to fd 2...
	I1206 10:35:07.017365  424612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:07.017698  424612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-383742/.minikube/bin
	I1206 10:35:07.018461  424612 out.go:368] Setting JSON to false
	I1206 10:35:07.019943  424612 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11857,"bootTime":1765005450,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 10:35:07.020021  424612 start.go:143] virtualization: kvm guest
	I1206 10:35:07.022044  424612 out.go:179] * [false-134334] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 10:35:07.023296  424612 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:35:07.023281  424612 notify.go:221] Checking for updates...
	I1206 10:35:07.025737  424612 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:35:07.027006  424612 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-383742/kubeconfig
	I1206 10:35:07.028578  424612 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-383742/.minikube
	I1206 10:35:07.029729  424612 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 10:35:07.031003  424612 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:35:07.032511  424612 config.go:182] Loaded profile config "NoKubernetes-674937": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1206 10:35:07.032618  424612 config.go:182] Loaded profile config "kubernetes-upgrade-926065": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1206 10:35:07.032688  424612 config.go:182] Loaded profile config "running-upgrade-777099": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I1206 10:35:07.032773  424612 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:35:07.069528  424612 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 10:35:07.070651  424612 start.go:309] selected driver: kvm2
	I1206 10:35:07.070670  424612 start.go:927] validating driver "kvm2" against <nil>
	I1206 10:35:07.070685  424612 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:35:07.072838  424612 out.go:203] 
	W1206 10:35:07.073969  424612 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1206 10:35:07.074980  424612 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-134334 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-134334

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-134334

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-134334

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-134334

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-134334

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-134334

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-134334

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-134334

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-134334

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-134334

>>> host: /etc/nsswitch.conf:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /etc/hosts:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /etc/resolv.conf:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-134334

>>> host: crictl pods:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: crictl containers:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> k8s: describe netcat deployment:
error: context "false-134334" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-134334" does not exist

>>> k8s: netcat logs:
error: context "false-134334" does not exist

>>> k8s: describe coredns deployment:
error: context "false-134334" does not exist

>>> k8s: describe coredns pods:
error: context "false-134334" does not exist

>>> k8s: coredns logs:
error: context "false-134334" does not exist

>>> k8s: describe api server pod(s):
error: context "false-134334" does not exist

>>> k8s: api server logs:
error: context "false-134334" does not exist

>>> host: /etc/cni:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: ip a s:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: ip r s:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: iptables-save:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: iptables table nat:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> k8s: describe kube-proxy daemon set:
error: context "false-134334" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-134334" does not exist

>>> k8s: kube-proxy logs:
error: context "false-134334" does not exist

>>> host: kubelet daemon status:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: kubelet daemon config:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> k8s: kubelet logs:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 10:34:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.224:8443
  name: NoKubernetes-674937
contexts:
- context:
    cluster: NoKubernetes-674937
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 10:34:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-674937
  name: NoKubernetes-674937
current-context: NoKubernetes-674937
kind: Config
users:
- name: NoKubernetes-674937
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/NoKubernetes-674937/client.crt
    client-key: /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/NoKubernetes-674937/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-134334

>>> host: docker daemon status:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: docker daemon config:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /etc/docker/daemon.json:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: docker system info:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: cri-docker daemon status:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: cri-docker daemon config:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: cri-dockerd version:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: containerd daemon status:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: containerd daemon config:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /etc/containerd/config.toml:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: containerd config dump:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: crio daemon status:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: crio daemon config:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: /etc/crio:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

>>> host: crio config:
* Profile "false-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-134334"

----------------------- debugLogs end: false-134334 [took: 3.931644553s] --------------------------------
helpers_test.go:175: Cleaning up "false-134334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-134334
--- PASS: TestNetworkPlugins/group/false (4.28s)
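The MK_USAGE exit recorded above is the expected outcome for the "false" network-plugin group: minikube refuses to start when CNI is explicitly disabled while the containerd runtime is selected, so the test passes once that validation error is observed. A minimal reproduction sketch follows; the exact flags passed by net_test.go for this group are an assumption and are not copied from this run.

# Hypothetical reproduction of the validation seen above (flags assumed, not taken from this report):
out/minikube-linux-amd64 start -p false-134334 --memory=3072 --cni=false \
    --driver=kvm2 --container-runtime=containerd
# Expected result:
#   X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI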

                                                
                                    
x
+
TestISOImage/Setup (24.73s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-929873 --no-kubernetes --driver=kvm2  --container-runtime=containerd
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-929873 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (24.727554284s)
--- PASS: TestISOImage/Setup (24.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (33.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674937 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674937 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (33.261968967s)
--- PASS: TestNoKubernetes/serial/Start (33.26s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22047-383742/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-674937 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-674937 "sudo systemctl is-active --quiet service kubelet": exit status 1 (183.046947ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (1.245237005s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-674937
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-674937: (1.488524553s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (34.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674937 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674937 --driver=kvm2  --container-runtime=containerd: (34.711113719s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (34.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-674937 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-674937 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.095401ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (150.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3648185103 start -p stopped-upgrade-848072 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd
E1206 10:37:15.540062  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3648185103 start -p stopped-upgrade-848072 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd: (1m37.878804394s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3648185103 -p stopped-upgrade-848072 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3648185103 -p stopped-upgrade-848072 stop: (1.45707782s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-848072 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1206 10:39:12.058618  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-848072 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (51.40147582s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (150.74s)

                                                
                                    
x
+
TestPause/serial/Start (123.52s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-351740 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-351740 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m3.522844452s)
--- PASS: TestPause/serial/Start (123.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (106.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m46.582147862s)
--- PASS: TestNetworkPlugins/group/auto/Start (106.58s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-848072
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-848072: (1.398304933s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (58.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (58.984476151s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.98s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.42s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-351740 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-351740 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (53.399679512s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (53.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-134334 "pgrep -a kubelet"
I1206 10:39:49.539051  387687 config.go:182] Loaded profile config "auto-134334": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-134334 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5tb9s" [d21835bf-da19-447c-98e4-146491276628] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5tb9s" [d21835bf-da19-447c-98e4-146491276628] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004852012s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-134334 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (73.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m13.768202831s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8bjbv" [d8d55639-0dc9-4e11-aed8-9a01278529da] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00589868s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-351740 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-351740 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-351740 --output=json --layout=cluster: exit status 2 (238.427329ms)

                                                
                                                
-- stdout --
	{"Name":"pause-351740","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-351740","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-351740 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-351740 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-134334 "pgrep -a kubelet"
I1206 10:40:21.504253  387687 config.go:182] Loaded profile config "kindnet-134334": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-134334 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k5dd6" [3f2e5fd7-15c8-4e5d-8464-abb1e11d361c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k5dd6" [3f2e5fd7-15c8-4e5d-8464-abb1e11d361c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006426086s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.88s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-351740 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.88s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.73s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (81.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m21.269230931s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-134334 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (66.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m6.184760545s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-bjvpt" [bc497b7b-c32e-4489-919c-017858ad0044] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-bjvpt" [bc497b7b-c32e-4489-919c-017858ad0044] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006945348s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-134334 "pgrep -a kubelet"
I1206 10:41:32.365616  387687 config.go:182] Loaded profile config "calico-134334": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-134334 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p6s52" [c3acba44-bc3d-4f52-8b25-c57d9e97a6c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p6s52" [c3acba44-bc3d-4f52-8b25-c57d9e97a6c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006026461s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (70.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m10.961129881s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-134334 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-134334 "pgrep -a kubelet"
I1206 10:41:44.975737  387687 config.go:182] Loaded profile config "custom-flannel-134334": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-134334 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bdnm7" [253a85da-7468-40f2-a76e-00a4fa78f100] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bdnm7" [253a85da-7468-40f2-a76e-00a4fa78f100] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.006427322s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-134334 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-134334 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
I1206 10:41:54.496209  387687 config.go:182] Loaded profile config "enable-default-cni-134334": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-134334 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s6v9z" [1c5532e0-9d4d-441c-8547-2d9f6df39ae3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s6v9z" [1c5532e0-9d4d-441c-8547-2d9f6df39ae3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005566804s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (85.52s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-134334 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m25.519256371s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.52s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-134334 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (104.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-530698 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E1206 10:42:15.536641  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-530698 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m44.885739167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (104.89s)

TestStartStop/group/no-preload/serial/FirstStart (113.82s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-252888 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-252888 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (1m53.817948556s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.82s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fxwmc" [538754ea-53b5-48c3-8f5c-6cea22b3df0d] Running
E1206 10:42:48.982946  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006310998s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-134334 "pgrep -a kubelet"
I1206 10:42:51.177699  387687 config.go:182] Loaded profile config "flannel-134334": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-134334 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q2xv7" [83a60338-cf98-410d-9edf-f1dd9c6a61f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q2xv7" [83a60338-cf98-410d-9edf-f1dd9c6a61f1] Running
E1206 10:43:00.694693  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-715379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004814576s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-134334 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestStartStop/group/embed-certs/serial/FirstStart (87.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-981506 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-981506 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m27.31578905s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.32s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-134334 "pgrep -a kubelet"
I1206 10:43:24.705362  387687 config.go:182] Loaded profile config "bridge-134334": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-134334 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2f2lr" [c44b33f5-3645-405f-bc15-a0aeec6a8037] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2f2lr" [c44b33f5-3645-405f-bc15-a0aeec6a8037] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005105555s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-134334 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-134334 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-291225 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-291225 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m20.314394504s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.31s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-530698 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4e5c2c2c-68bf-499f-b40f-394fd4b6617c] Pending
helpers_test.go:352: "busybox" [4e5c2c2c-68bf-499f-b40f-394fd4b6617c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4e5c2c2c-68bf-499f-b40f-394fd4b6617c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003644419s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-530698 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-530698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-530698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028092692s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-530698 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/old-k8s-version/serial/Stop (82.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-530698 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-530698 --alsologtostderr -v=3: (1m22.343910371s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (82.34s)

TestStartStop/group/no-preload/serial/DeployApp (7.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-252888 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3b26193a-90d6-482a-a855-fd66adce7c1b] Pending
helpers_test.go:352: "busybox" [3b26193a-90d6-482a-a855-fd66adce7c1b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3b26193a-90d6-482a-a855-fd66adce7c1b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004750941s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-252888 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-252888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-252888 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/no-preload/serial/Stop (71.57s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-252888 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-252888 --alsologtostderr -v=3: (1m11.567372324s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (71.57s)

TestStartStop/group/embed-certs/serial/DeployApp (7.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-981506 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7ada9cc3-c64e-4824-a795-02bd300d9b59] Pending
helpers_test.go:352: "busybox" [7ada9cc3-c64e-4824-a795-02bd300d9b59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7ada9cc3-c64e-4824-a795-02bd300d9b59] Running
E1206 10:44:49.765194  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:44:49.771573  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:44:49.782917  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:44:49.804235  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:44:49.845574  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:44:49.926990  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:44:50.088764  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:44:50.410721  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:44:51.052554  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004145211s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-981506 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-981506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1206 10:44:52.334271  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-981506 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/embed-certs/serial/Stop (77.81s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-981506 --alsologtostderr -v=3
E1206 10:44:54.896198  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:00.018146  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:10.259703  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-981506 --alsologtostderr -v=3: (1m17.809784327s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (77.81s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-291225 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [683edd6b-c706-4496-82a1-1d808a308e02] Pending
helpers_test.go:352: "busybox" [683edd6b-c706-4496-82a1-1d808a308e02] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [683edd6b-c706-4496-82a1-1d808a308e02] Running
E1206 10:45:15.313380  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:15.319765  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:15.331149  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:15.352481  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:15.394215  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:15.475626  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:15.637322  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:15.958980  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:16.601161  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:17.883336  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004157206s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-291225 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-291225 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-291225 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (87.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-291225 --alsologtostderr -v=3
E1206 10:45:20.444714  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:25.566221  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:30.741725  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-291225 --alsologtostderr -v=3: (1m27.500592049s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-530698 -n old-k8s-version-530698
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-530698 -n old-k8s-version-530698: exit status 7 (60.29322ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-530698 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/old-k8s-version/serial/SecondStart (38.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-530698 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-530698 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (38.149823201s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-530698 -n old-k8s-version-530698
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (38.40s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-252888 -n no-preload-252888
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-252888 -n no-preload-252888: exit status 7 (70.071901ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-252888 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/no-preload/serial/SecondStart (54.85s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-252888 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 10:45:35.808028  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:45:56.290001  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-252888 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (54.545606024s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-252888 -n no-preload-252888
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.85s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wbsk9" [2021c776-d186-4e60-a3a5-c11be9065af0] Pending
E1206 10:46:11.703057  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wbsk9" [2021c776-d186-4e60-a3a5-c11be9065af0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wbsk9" [2021c776-d186-4e60-a3a5-c11be9065af0] Running
E1206 10:46:26.131596  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:26.138000  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:26.149368  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:26.170745  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:26.212209  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:26.293775  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:26.455728  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:26.777493  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.004725991s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-981506 -n embed-certs-981506
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-981506 -n embed-certs-981506: exit status 7 (71.640636ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-981506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (45.52s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-981506 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-981506 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2: (45.234561739s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-981506 -n embed-certs-981506
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.52s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wbsk9" [2021c776-d186-4e60-a3a5-c11be9065af0] Running
E1206 10:46:27.419514  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:28.701097  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005388786s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-530698 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-v6g9f" [7cff98e3-3972-4bf3-8a80-0babac293c7f] Running
E1206 10:46:31.262714  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005322162s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-530698 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-530698 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-530698 -n old-k8s-version-530698
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-530698 -n old-k8s-version-530698: exit status 2 (231.905969ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-530698 -n old-k8s-version-530698
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-530698 -n old-k8s-version-530698: exit status 2 (231.425304ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-530698 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-530698 -n old-k8s-version-530698
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-530698 -n old-k8s-version-530698
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.82s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-v6g9f" [7cff98e3-3972-4bf3-8a80-0babac293c7f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006718108s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-252888 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/newest-cni/serial/FirstStart (43.33s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-856216 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 10:46:37.252263  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/kindnet-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-856216 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (43.330786943s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.33s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-252888 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (3.05s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-252888 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-252888 -n no-preload-252888
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-252888 -n no-preload-252888: exit status 2 (251.490706ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-252888 -n no-preload-252888
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-252888 -n no-preload-252888: exit status 2 (250.290263ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-252888 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-252888 -n no-preload-252888
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-252888 -n no-preload-252888
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.05s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
E1206 10:46:46.627171  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
E1206 10:46:46.559994  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/custom-flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291225 -n default-k8s-diff-port-291225
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291225 -n default-k8s-diff-port-291225: exit status 7 (67.542603ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-291225 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-291225 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-291225 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.2: (54.497769728s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-291225 -n default-k8s-diff-port-291225
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.76s)

                                                
                                    
TestISOImage/VersionJSON (0.2s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   kicbase_version: v0.0.48-1764169655-21974
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e
iso_test.go:118:   iso_version: v1.37.0-1764843329-22032
--- PASS: TestISOImage/VersionJSON (0.20s)

                                                
                                    
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-929873 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
E1206 10:46:50.403322  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/custom-flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:54.775545  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:54.781924  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:54.793225  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:54.814614  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:54.856377  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:54.938057  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:55.099828  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:55.421737  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:55.525251  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/custom-flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:56.063642  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-swdc5" [f42233fe-276a-4d97-a804-4bff22b30d14] Running
E1206 10:46:57.345578  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:58.609775  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/functional-878866/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:59.907280  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004906513s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-swdc5" [f42233fe-276a-4d97-a804-4bff22b30d14] Running
E1206 10:47:05.029368  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:05.766963  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/custom-flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:07.109432  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004754886s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-981506 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-981506 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-981506 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-981506 -n embed-certs-981506
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-981506 -n embed-certs-981506: exit status 2 (255.838516ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-981506 -n embed-certs-981506
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-981506 -n embed-certs-981506: exit status 2 (244.015324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-981506 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-981506 -n embed-certs-981506
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-981506 -n embed-certs-981506
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-856216 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-856216 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-856216 --alsologtostderr -v=3: (2.525976002s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-856216 -n newest-cni-856216
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-856216 -n newest-cni-856216: exit status 7 (63.632507ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-856216 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-856216 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 10:47:26.249123  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/custom-flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:33.624689  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/auto-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:35.752632  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/enable-default-cni-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-856216 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (31.815997614s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-856216 -n newest-cni-856216
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sfv6h" [c771fe53-9d6a-4f26-a869-33df7ead0ff6] Running
E1206 10:47:44.978374  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:44.985374  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:44.997461  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:45.018832  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:45.060300  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:45.142265  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:45.304218  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:45.626487  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:46.268449  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:47.549885  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00594848s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sfv6h" [c771fe53-9d6a-4f26-a869-33df7ead0ff6] Running
E1206 10:47:48.071270  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/calico-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:48.982887  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/addons-269722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:47:50.111880  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004863386s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-291225 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-291225 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-291225 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-291225 -n default-k8s-diff-port-291225
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-291225 -n default-k8s-diff-port-291225: exit status 2 (222.420972ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-291225 -n default-k8s-diff-port-291225
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-291225 -n default-k8s-diff-port-291225: exit status 2 (238.08101ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-291225 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-291225 -n default-k8s-diff-port-291225
E1206 10:47:55.234033  387687 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/flannel-134334/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-291225 -n default-k8s-diff-port-291225
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-856216 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-856216 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-856216 -n newest-cni-856216
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-856216 -n newest-cni-856216: exit status 2 (219.452146ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-856216 -n newest-cni-856216
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-856216 -n newest-cni-856216: exit status 2 (205.679784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-856216 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-856216 -n newest-cni-856216
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-856216 -n newest-cni-856216
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.33s)

                                                
                                    

Test skip (51/437)

Order Skipped test Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
152 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
153 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
154 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
157 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
158 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
159 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
361 TestNetworkPlugins/group/kubenet 5.21
369 TestNetworkPlugins/group/cilium 4.18
397 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-134334 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-134334" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 10:34:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.224:8443
  name: NoKubernetes-674937
contexts:
- context:
    cluster: NoKubernetes-674937
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 10:34:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-674937
  name: NoKubernetes-674937
current-context: NoKubernetes-674937
kind: Config
users:
- name: NoKubernetes-674937
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/NoKubernetes-674937/client.crt
    client-key: /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/NoKubernetes-674937/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-134334

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-134334"

                                                
                                                
----------------------- debugLogs end: kubenet-134334 [took: 5.031733241s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-134334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-134334
--- SKIP: TestNetworkPlugins/group/kubenet (5.21s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-134334 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-134334" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-383742/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 10:34:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.224:8443
  name: NoKubernetes-674937
contexts:
- context:
    cluster: NoKubernetes-674937
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 10:34:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-674937
  name: NoKubernetes-674937
current-context: NoKubernetes-674937
kind: Config
users:
- name: NoKubernetes-674937
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/NoKubernetes-674937/client.crt
    client-key: /home/jenkins/minikube-integration/22047-383742/.minikube/profiles/NoKubernetes-674937/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-134334

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-134334" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134334"

                                                
                                                
----------------------- debugLogs end: cilium-134334 [took: 4.000742461s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-134334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-134334
--- SKIP: TestNetworkPlugins/group/cilium (4.18s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-658243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-658243
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    