Test Report: none_Linux 20598

63c1754226199ce281e4ac8e931674d5ef457043:2025-04-07:39038

Failed tests (2/170)

| Order | Failed test               | Duration (s) |
|-------|---------------------------|--------------|
| 29    | TestAddons/serial/Volcano | 372.74       |
| 40    | TestAddons/parallel/CSI   | 388.72       |
TestAddons/serial/Volcano (372.74s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 8.793155ms
addons_test.go:815: volcano-admission stabilized in 8.856229ms
addons_test.go:807: volcano-scheduler stabilized in 8.898522ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-kkrdq" [eca17150-2673-4431-a0cc-079a7c574525] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
addons_test.go:829: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:829: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:829: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-04-07 12:53:54.220048742 +0000 UTC m=+542.002069108
addons_test.go:829: (dbg) Run:  kubectl --context minikube describe po volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system
addons_test.go:829: (dbg) kubectl --context minikube describe po volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system:
Name:                 volcano-scheduler-75fdd99bcf-kkrdq
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 ubuntu-20-agent/10.132.0.4
Start Time:           Mon, 07 Apr 2025 12:46:23 +0000
Labels:               app=volcano-scheduler
pod-template-hash=75fdd99bcf
Annotations:          <none>
Status:               Pending
IP:                   10.244.0.19
IPs:
IP:           10.244.0.19
Controlled By:  ReplicaSet/volcano-scheduler-75fdd99bcf
Containers:
volcano-scheduler:
Container ID:  
Image:         docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86
Image ID:      
Port:          <none>
Host Port:     <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
--kube-api-qps=2000
--kube-api-burst=2000
--schedule-period=1s
--node-worker-threads=20
-v=3
2>&1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
DEBUG_SOCKET_DIR:  /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mbqtk (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
scheduler-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      volcano-scheduler-configmap
Optional:  false
klog-sock:
Type:          HostPath (bare host directory volume)
Path:          /tmp/klog-socks
HostPathType:  
kube-api-access-mbqtk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  7m31s                  default-scheduler  Successfully assigned volcano-system/volcano-scheduler-75fdd99bcf-kkrdq to ubuntu-20-agent
Normal   Pulling    4m (x5 over 7m30s)     kubelet            Pulling image "docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
Warning  Failed     3m59s (x5 over 6m55s)  kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m59s (x5 over 6m55s)  kubelet            Error: ErrImagePull
Warning  Failed     115s (x20 over 6m55s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    100s (x21 over 6m55s)  kubelet            Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
addons_test.go:829: (dbg) Run:  kubectl --context minikube logs volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system
addons_test.go:829: (dbg) Non-zero exit: kubectl --context minikube logs volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system: exit status 1 (78.987491ms)

** stderr ** 
	Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-75fdd99bcf-kkrdq" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:829: kubectl --context minikube logs volcano-scheduler-75fdd99bcf-kkrdq -n volcano-system: exit status 1
addons_test.go:830: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:44 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| start   | --download-only -p             | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC |                     |
	|         | minikube --alsologtostderr     |          |         |         |                     |                     |
	|         | --binary-mirror                |          |         |         |                     |                     |
	|         | http://127.0.0.1:38191         |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| start   | -p minikube --alsologtostderr  | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	|         | -v=1 --memory=2048             |          |         |         |                     |                     |
	|         | --wait=true --driver=none      |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:46 UTC |
	| addons  | enable dashboard -p minikube   | minikube | jenkins | v1.35.0 | 07 Apr 25 12:46 UTC |                     |
	| addons  | disable dashboard -p minikube  | minikube | jenkins | v1.35.0 | 07 Apr 25 12:46 UTC |                     |
	| start   | -p minikube --wait=true        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:46 UTC | 07 Apr 25 12:47 UTC |
	|         | --memory=4000                  |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --addons=registry              |          |         |         |                     |                     |
	|         | --addons=metrics-server        |          |         |         |                     |                     |
	|         | --addons=volumesnapshots       |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |          |         |         |                     |                     |
	|         | --addons=gcp-auth              |          |         |         |                     |                     |
	|         | --addons=cloud-spanner         |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin  |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano |          |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:46:01
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:46:01.231062 1429316 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:46:01.231195 1429316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:46:01.231206 1429316 out.go:358] Setting ErrFile to fd 2...
	I0407 12:46:01.231210 1429316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:46:01.231464 1429316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1418173/.minikube/bin
	I0407 12:46:01.232140 1429316 out.go:352] Setting JSON to false
	I0407 12:46:01.233179 1429316 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":16105,"bootTime":1744013856,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:46:01.233311 1429316 start.go:139] virtualization: kvm guest
	I0407 12:46:01.235474 1429316 out.go:177] * minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 12:46:01.236694 1429316 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:46:01.236729 1429316 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:46:01.236731 1429316 notify.go:220] Checking for updates...
	I0407 12:46:01.239515 1429316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:46:01.240993 1429316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
	I0407 12:46:01.242159 1429316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
	I0407 12:46:01.243419 1429316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:46:01.244910 1429316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:46:01.246416 1429316 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:46:01.257114 1429316 out.go:177] * Using the none driver based on user configuration
	I0407 12:46:01.258434 1429316 start.go:297] selected driver: none
	I0407 12:46:01.258453 1429316 start.go:901] validating driver "none" against <nil>
	I0407 12:46:01.258480 1429316 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:46:01.258516 1429316 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0407 12:46:01.258825 1429316 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0407 12:46:01.259483 1429316 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:46:01.259773 1429316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:46:01.259810 1429316 cni.go:84] Creating CNI manager for ""
	I0407 12:46:01.259875 1429316 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:46:01.259906 1429316 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:46:01.259962 1429316 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:46:01.261473 1429316 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0407 12:46:01.262965 1429316 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json ...
	I0407 12:46:01.263009 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json: {Name:mk7435778f484db7c9644d73cb119c70d439299f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:01.263157 1429316 start.go:360] acquireMachinesLock for minikube: {Name:mk53793948be750dfc684af85278e6856b44afc9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 12:46:01.263242 1429316 start.go:364] duration metric: took 28.329µs to acquireMachinesLock for "minikube"
	I0407 12:46:01.263265 1429316 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 12:46:01.263340 1429316 start.go:125] createHost starting for "" (driver="none")
	I0407 12:46:01.265117 1429316 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0407 12:46:01.267404 1429316 exec_runner.go:51] Run: systemctl --version
	I0407 12:46:01.270063 1429316 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0407 12:46:01.270101 1429316 client.go:168] LocalClient.Create starting
	I0407 12:46:01.270187 1429316 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/ca.pem
	I0407 12:46:01.270218 1429316 main.go:141] libmachine: Decoding PEM data...
	I0407 12:46:01.270234 1429316 main.go:141] libmachine: Parsing certificate...
	I0407 12:46:01.270296 1429316 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/cert.pem
	I0407 12:46:01.270319 1429316 main.go:141] libmachine: Decoding PEM data...
	I0407 12:46:01.270329 1429316 main.go:141] libmachine: Parsing certificate...
	I0407 12:46:01.270642 1429316 client.go:171] duration metric: took 532.06µs to LocalClient.Create
	I0407 12:46:01.270666 1429316 start.go:167] duration metric: took 613.883µs to libmachine.API.Create "minikube"
	I0407 12:46:01.270673 1429316 start.go:293] postStartSetup for "minikube" (driver="none")
	I0407 12:46:01.270708 1429316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 12:46:01.270753 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 12:46:01.280436 1429316 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 12:46:01.280458 1429316 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 12:46:01.280466 1429316 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 12:46:01.282450 1429316 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0407 12:46:01.283786 1429316 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1418173/.minikube/addons for local assets ...
	I0407 12:46:01.283847 1429316 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1418173/.minikube/files for local assets ...
	I0407 12:46:01.283872 1429316 start.go:296] duration metric: took 13.189796ms for postStartSetup
	I0407 12:46:01.284520 1429316 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json ...
	I0407 12:46:01.284674 1429316 start.go:128] duration metric: took 21.323169ms to createHost
	I0407 12:46:01.284690 1429316 start.go:83] releasing machines lock for "minikube", held for 21.433196ms
	I0407 12:46:01.285057 1429316 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 12:46:01.285154 1429316 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0407 12:46:01.287094 1429316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 12:46:01.287141 1429316 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 12:46:01.297196 1429316 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0407 12:46:01.297229 1429316 start.go:495] detecting cgroup driver to use...
	I0407 12:46:01.297261 1429316 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 12:46:01.297368 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:46:01.319217 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 12:46:01.329584 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 12:46:01.338895 1429316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 12:46:01.338957 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 12:46:01.349057 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:46:01.359932 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 12:46:01.375600 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:46:01.386405 1429316 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 12:46:01.396041 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 12:46:01.406577 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 12:46:01.429519 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 12:46:01.439514 1429316 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 12:46:01.448440 1429316 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 12:46:01.456361 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:01.690650 1429316 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0407 12:46:01.754996 1429316 start.go:495] detecting cgroup driver to use...
	I0407 12:46:01.755055 1429316 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 12:46:01.755169 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:46:01.781838 1429316 exec_runner.go:51] Run: which cri-dockerd
	I0407 12:46:01.782866 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 12:46:01.791549 1429316 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0407 12:46:01.791585 1429316 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0407 12:46:01.791637 1429316 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0407 12:46:01.800329 1429316 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 12:46:01.800548 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1817254808 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0407 12:46:01.809824 1429316 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0407 12:46:02.026255 1429316 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0407 12:46:02.249916 1429316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 12:46:02.250098 1429316 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0407 12:46:02.250116 1429316 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0407 12:46:02.250166 1429316 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0407 12:46:02.259552 1429316 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0407 12:46:02.259746 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1779169503 /etc/docker/daemon.json
	I0407 12:46:02.268933 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:02.501531 1429316 exec_runner.go:51] Run: sudo systemctl restart docker
	I0407 12:46:02.848272 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 12:46:02.861572 1429316 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0407 12:46:02.879408 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:46:02.890750 1429316 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0407 12:46:03.122082 1429316 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0407 12:46:03.361507 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:03.590334 1429316 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0407 12:46:03.605891 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:46:03.618044 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:03.839254 1429316 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0407 12:46:03.911084 1429316 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 12:46:03.911171 1429316 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0407 12:46:03.912678 1429316 start.go:563] Will wait 60s for crictl version
	I0407 12:46:03.912723 1429316 exec_runner.go:51] Run: which crictl
	I0407 12:46:03.913606 1429316 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0407 12:46:03.947511 1429316 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.0.4
	RuntimeApiVersion:  v1
	I0407 12:46:03.947603 1429316 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0407 12:46:03.971036 1429316 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0407 12:46:03.995613 1429316 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.4 ...
	I0407 12:46:03.995718 1429316 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0407 12:46:03.998437 1429316 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0407 12:46:03.999593 1429316 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.132.0.4 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 12:46:03.999705 1429316 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:46:03.999717 1429316 kubeadm.go:934] updating node { 10.132.0.4 8443 v1.32.2 docker true true} ...
	I0407 12:46:03.999847 1429316 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.132.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0407 12:46:03.999895 1429316 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0407 12:46:04.048035 1429316 cni.go:84] Creating CNI manager for ""
	I0407 12:46:04.048071 1429316 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:46:04.048086 1429316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 12:46:04.048111 1429316 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.132.0.4 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.132.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.132.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 12:46:04.048253 1429316 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.132.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "10.132.0.4"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.132.0.4"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 12:46:04.048321 1429316 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 12:46:04.057083 1429316 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0407 12:46:04.057170 1429316 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0407 12:46:04.065629 1429316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0407 12:46:04.065684 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:46:04.065685 1429316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0407 12:46:04.065755 1429316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0407 12:46:04.065764 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0407 12:46:04.065802 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0407 12:46:04.077514 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0407 12:46:04.124513 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3196352494 /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 12:46:04.130593 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4160672640 /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 12:46:04.149975 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2575322256 /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 12:46:04.230292 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 12:46:04.239456 1429316 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0407 12:46:04.239485 1429316 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0407 12:46:04.239525 1429316 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0407 12:46:04.247941 1429316 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0407 12:46:04.248129 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2812554544 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0407 12:46:04.256651 1429316 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0407 12:46:04.256679 1429316 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0407 12:46:04.256714 1429316 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0407 12:46:04.264872 1429316 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 12:46:04.265044 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2861707748 /lib/systemd/system/kubelet.service
	I0407 12:46:04.273635 1429316 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0407 12:46:04.273784 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3026017713 /var/tmp/minikube/kubeadm.yaml.new
	I0407 12:46:04.282029 1429316 exec_runner.go:51] Run: grep 10.132.0.4	control-plane.minikube.internal$ /etc/hosts
	I0407 12:46:04.283624 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:04.517665 1429316 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0407 12:46:04.532121 1429316 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube for IP: 10.132.0.4
	I0407 12:46:04.532154 1429316 certs.go:194] generating shared ca certs ...
	I0407 12:46:04.532182 1429316 certs.go:226] acquiring lock for ca certs: {Name:mke037ea5f6110cd4db349ee47a4532de031e41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.532401 1429316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.key
	I0407 12:46:04.532475 1429316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/proxy-client-ca.key
	I0407 12:46:04.532490 1429316 certs.go:256] generating profile certs ...
	I0407 12:46:04.532571 1429316 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.key
	I0407 12:46:04.532593 1429316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.crt with IP's: []
	I0407 12:46:04.746361 1429316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.crt ...
	I0407 12:46:04.746398 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.crt: {Name:mkd685522b407e574e9a17242256ea962f13d180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.746567 1429316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.key ...
	I0407 12:46:04.746584 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.key: {Name:mk93b3de66b65705ca976ab8fb0e07c53d19cd38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.746673 1429316 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f
	I0407 12:46:04.746690 1429316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.132.0.4]
	I0407 12:46:04.946265 1429316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f ...
	I0407 12:46:04.946301 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f: {Name:mkc92f9f9b71902112ff236a3fce9245b28fbc4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.946465 1429316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f ...
	I0407 12:46:04.946486 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f: {Name:mk8e0d10049da8458969638f3be970030e3a7c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.946565 1429316 certs.go:381] copying /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f -> /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt
	I0407 12:46:04.946677 1429316 certs.go:385] copying /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f -> /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key
	I0407 12:46:04.946745 1429316 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key
	I0407 12:46:04.946768 1429316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0407 12:46:05.422333 1429316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt ...
	I0407 12:46:05.422367 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt: {Name:mk657c14bd9f3b8cdc778a995b4cc49084dc96e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:05.422505 1429316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key ...
	I0407 12:46:05.422521 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key: {Name:mk04945c273de7864e5113cfa901b08a2b911d34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:05.422716 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 12:46:05.422763 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/ca.pem (1082 bytes)
	I0407 12:46:05.422791 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/cert.pem (1123 bytes)
	I0407 12:46:05.422814 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/key.pem (1675 bytes)
	I0407 12:46:05.423465 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 12:46:05.423590 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3061948531 /var/lib/minikube/certs/ca.crt
	I0407 12:46:05.432860 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0407 12:46:05.433025 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3068014908 /var/lib/minikube/certs/ca.key
	I0407 12:46:05.443022 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 12:46:05.443199 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3928182364 /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 12:46:05.453837 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 12:46:05.453966 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2350601260 /var/lib/minikube/certs/proxy-client-ca.key
	I0407 12:46:05.463595 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0407 12:46:05.463752 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2667253322 /var/lib/minikube/certs/apiserver.crt
	I0407 12:46:05.473238 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 12:46:05.473362 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2852199116 /var/lib/minikube/certs/apiserver.key
	I0407 12:46:05.482563 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 12:46:05.482740 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3185536542 /var/lib/minikube/certs/proxy-client.crt
	I0407 12:46:05.491833 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 12:46:05.491981 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000808512 /var/lib/minikube/certs/proxy-client.key
	I0407 12:46:05.500441 1429316 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0407 12:46:05.500465 1429316 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.500497 1429316 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.508314 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 12:46:05.508471 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube385856075 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.517215 1429316 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 12:46:05.517362 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2963500488 /var/lib/minikube/kubeconfig
	I0407 12:46:05.526041 1429316 exec_runner.go:51] Run: openssl version
	I0407 12:46:05.528924 1429316 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 12:46:05.537534 1429316 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.538811 1429316 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Apr  7 12:46 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.538859 1429316 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.541631 1429316 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 12:46:05.552781 1429316 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 12:46:05.553844 1429316 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 12:46:05.553891 1429316 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.132.0.4 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:46:05.553998 1429316 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 12:46:05.570270 1429316 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 12:46:05.579733 1429316 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 12:46:05.595767 1429316 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0407 12:46:05.617394 1429316 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 12:46:05.627797 1429316 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 12:46:05.627825 1429316 kubeadm.go:157] found existing configuration files:
	
	I0407 12:46:05.627872 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 12:46:05.636647 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 12:46:05.636704 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 12:46:05.644490 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 12:46:05.653066 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 12:46:05.653120 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 12:46:05.660877 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 12:46:05.670067 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 12:46:05.670133 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 12:46:05.678615 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 12:46:05.689345 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 12:46:05.689418 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 12:46:05.697526 1429316 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 12:46:05.733301 1429316 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 12:46:05.733366 1429316 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 12:46:05.761513 1429316 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0407 12:46:05.827926 1429316 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 12:46:05.827987 1429316 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 12:46:05.827995 1429316 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 12:46:05.828001 1429316 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 12:46:05.838908 1429316 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 12:46:05.842792 1429316 out.go:235]   - Generating certificates and keys ...
	I0407 12:46:05.842849 1429316 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 12:46:05.842866 1429316 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 12:46:05.929822 1429316 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 12:46:06.034156 1429316 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 12:46:06.137512 1429316 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 12:46:06.399738 1429316 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 12:46:06.658454 1429316 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 12:46:06.658837 1429316 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
	I0407 12:46:06.793515 1429316 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 12:46:06.793616 1429316 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
	I0407 12:46:07.111754 1429316 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 12:46:07.239104 1429316 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 12:46:07.374867 1429316 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 12:46:07.375054 1429316 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 12:46:07.516836 1429316 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 12:46:07.676713 1429316 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 12:46:08.039272 1429316 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 12:46:08.150766 1429316 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 12:46:08.340603 1429316 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 12:46:08.341788 1429316 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 12:46:08.344254 1429316 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 12:46:08.346695 1429316 out.go:235]   - Booting up control plane ...
	I0407 12:46:08.346729 1429316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 12:46:08.346756 1429316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 12:46:08.347211 1429316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 12:46:08.372882 1429316 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 12:46:08.377541 1429316 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 12:46:08.377576 1429316 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 12:46:08.617762 1429316 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 12:46:08.617787 1429316 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 12:46:09.119698 1429316 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.893396ms
	I0407 12:46:09.119727 1429316 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 12:46:14.121618 1429316 kubeadm.go:310] [api-check] The API server is healthy after 5.001918177s
	I0407 12:46:14.134209 1429316 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 12:46:14.145166 1429316 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 12:46:14.166074 1429316 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 12:46:14.166105 1429316 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 12:46:14.173597 1429316 kubeadm.go:310] [bootstrap-token] Using token: p4kop0.df2qjc17ds7iaiam
	I0407 12:46:14.175343 1429316 out.go:235]   - Configuring RBAC rules ...
	I0407 12:46:14.175389 1429316 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 12:46:14.178620 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 12:46:14.184157 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 12:46:14.186768 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 12:46:14.189495 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 12:46:14.193735 1429316 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 12:46:14.528888 1429316 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 12:46:14.951790 1429316 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 12:46:15.528465 1429316 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 12:46:15.529302 1429316 kubeadm.go:310] 
	I0407 12:46:15.529328 1429316 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 12:46:15.529333 1429316 kubeadm.go:310] 
	I0407 12:46:15.529338 1429316 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 12:46:15.529342 1429316 kubeadm.go:310] 
	I0407 12:46:15.529346 1429316 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 12:46:15.529350 1429316 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 12:46:15.529376 1429316 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 12:46:15.529385 1429316 kubeadm.go:310] 
	I0407 12:46:15.529390 1429316 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 12:46:15.529394 1429316 kubeadm.go:310] 
	I0407 12:46:15.529398 1429316 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 12:46:15.529402 1429316 kubeadm.go:310] 
	I0407 12:46:15.529406 1429316 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 12:46:15.529410 1429316 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 12:46:15.529415 1429316 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 12:46:15.529422 1429316 kubeadm.go:310] 
	I0407 12:46:15.529428 1429316 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 12:46:15.529432 1429316 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 12:46:15.529434 1429316 kubeadm.go:310] 
	I0407 12:46:15.529439 1429316 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p4kop0.df2qjc17ds7iaiam \
	I0407 12:46:15.529443 1429316 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a0218baebfbd26086bf2c1fda945fcf4b4d1b776503555f789838ba1e80aed9c \
	I0407 12:46:15.529446 1429316 kubeadm.go:310] 	--control-plane 
	I0407 12:46:15.529448 1429316 kubeadm.go:310] 
	I0407 12:46:15.529451 1429316 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 12:46:15.529454 1429316 kubeadm.go:310] 
	I0407 12:46:15.529456 1429316 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p4kop0.df2qjc17ds7iaiam \
	I0407 12:46:15.529459 1429316 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a0218baebfbd26086bf2c1fda945fcf4b4d1b776503555f789838ba1e80aed9c 
	I0407 12:46:15.532573 1429316 cni.go:84] Creating CNI manager for ""
	I0407 12:46:15.532610 1429316 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:46:15.534535 1429316 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 12:46:15.535691 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0407 12:46:15.547497 1429316 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0407 12:46:15.547645 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3694858315 /etc/cni/net.d/1-k8s.conflist
	I0407 12:46:15.557811 1429316 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 12:46:15.557870 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:15.557891 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent minikube.k8s.io/updated_at=2025_04_07T12_46_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0407 12:46:15.566997 1429316 ops.go:34] apiserver oom_adj: -16
	I0407 12:46:15.628992 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:16.129805 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:16.629609 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:17.129288 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:17.629737 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:18.129916 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:18.629214 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:19.129880 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:19.629695 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:20.129764 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:20.197626 1429316 kubeadm.go:1113] duration metric: took 4.639807769s to wait for elevateKubeSystemPrivileges
	I0407 12:46:20.197664 1429316 kubeadm.go:394] duration metric: took 14.643775896s to StartCluster
	I0407 12:46:20.197703 1429316 settings.go:142] acquiring lock: {Name:mk1a74bdc4efde062e045448da0c418856eac793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:20.197785 1429316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-1418173/kubeconfig
	I0407 12:46:20.198485 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/kubeconfig: {Name:mk79daf009e4d10ee19338674231a661a076a223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:20.198740 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 12:46:20.198900 1429316 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:true volumesnapshots:true yakd:true]
	I0407 12:46:20.199009 1429316 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:46:20.199034 1429316 addons.go:69] Setting yakd=true in profile "minikube"
	I0407 12:46:20.199052 1429316 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0407 12:46:20.199061 1429316 addons.go:238] Setting addon yakd=true in "minikube"
	I0407 12:46:20.199070 1429316 addons.go:69] Setting amd-gpu-device-plugin=true in profile "minikube"
	I0407 12:46:20.199083 1429316 addons.go:238] Setting addon amd-gpu-device-plugin=true in "minikube"
	I0407 12:46:20.199100 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.199106 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.199249 1429316 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0407 12:46:20.199278 1429316 addons.go:238] Setting addon cloud-spanner=true in "minikube"
	I0407 12:46:20.199297 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.199327 1429316 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0407 12:46:20.199353 1429316 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0407 12:46:20.199883 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.199907 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.199922 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.199941 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.199942 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.199982 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.200038 1429316 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0407 12:46:20.200132 1429316 addons.go:238] Setting addon csi-hostpath-driver=true in "minikube"
	I0407 12:46:20.200175 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.200269 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.200284 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.200314 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.200885 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.200911 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.200946 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.201033 1429316 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0407 12:46:20.201064 1429316 mustload.go:65] Loading cluster: minikube
	I0407 12:46:20.201270 1429316 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:46:20.202003 1429316 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0407 12:46:20.202025 1429316 addons.go:238] Setting addon storage-provisioner=true in "minikube"
	I0407 12:46:20.202177 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.202879 1429316 out.go:177] * Configuring local host environment ...
	I0407 12:46:20.203400 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.203417 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.203451 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0407 12:46:20.204888 1429316 out.go:270] * 
	W0407 12:46:20.204905 1429316 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0407 12:46:20.204912 1429316 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0407 12:46:20.204919 1429316 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0407 12:46:20.204925 1429316 out.go:270] * 
	W0407 12:46:20.204969 1429316 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0407 12:46:20.204976 1429316 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0407 12:46:20.204981 1429316 out.go:270] * 
	W0407 12:46:20.205013 1429316 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0407 12:46:20.205020 1429316 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0407 12:46:20.205025 1429316 out.go:270] * 
	W0407 12:46:20.205032 1429316 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0407 12:46:20.205059 1429316 start.go:235] Will wait 6m0s for node &{Name: IP:10.132.0.4 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 12:46:20.205947 1429316 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0407 12:46:20.205967 1429316 addons.go:238] Setting addon nvidia-device-plugin=true in "minikube"
	I0407 12:46:20.205997 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.206023 1429316 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0407 12:46:20.206045 1429316 addons.go:238] Setting addon metrics-server=true in "minikube"
	I0407 12:46:20.206080 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.206445 1429316 addons.go:69] Setting registry=true in profile "minikube"
	I0407 12:46:20.206466 1429316 addons.go:238] Setting addon registry=true in "minikube"
	I0407 12:46:20.206547 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.206622 1429316 addons.go:69] Setting volcano=true in profile "minikube"
	I0407 12:46:20.206644 1429316 out.go:177] * Verifying Kubernetes components...
	I0407 12:46:20.206669 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.206689 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.206717 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.206727 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.206734 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.206780 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.206841 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.206865 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.206903 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.206918 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.206936 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.207006 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.207280 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.207337 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.206656 1429316 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0407 12:46:20.207373 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.207390 1429316 addons.go:238] Setting addon volumesnapshots=true in "minikube"
	I0407 12:46:20.207430 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.206647 1429316 addons.go:238] Setting addon volcano=true in "minikube"
	I0407 12:46:20.207542 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.199062 1429316 addons.go:238] Setting addon inspektor-gadget=true in "minikube"
	I0407 12:46:20.207852 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.208086 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.208111 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.208142 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.208317 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.208378 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.208278 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:20.208509 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.211997 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.212040 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.212080 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.222010 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.223069 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.223578 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.224955 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.225989 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.243468 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.261161 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.243475 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.262094 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.262176 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.264542 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.264606 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.264844 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.264909 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.266233 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.266293 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.269905 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.276799 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.276835 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.278142 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.278955 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.279018 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.279925 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.282436 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.284367 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.286078 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0407 12:46:20.287484 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0407 12:46:20.289453 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.289485 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.291042 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0407 12:46:20.292332 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.292410 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.293848 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0407 12:46:20.294863 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.294880 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.294889 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.295689 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.295875 1429316 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0407 12:46:20.296807 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.296874 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.297128 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0407 12:46:20.297166 1429316 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0407 12:46:20.297339 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3526351157 /etc/kubernetes/addons/yakd-ns.yaml
	I0407 12:46:20.297496 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0407 12:46:20.299046 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0407 12:46:20.300004 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.300028 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.301485 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0407 12:46:20.302071 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.302142 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.303806 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.303862 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.303964 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0407 12:46:20.304170 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.304219 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.304379 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.304394 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.305346 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0407 12:46:20.305381 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0407 12:46:20.305539 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3653908400 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0407 12:46:20.306131 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.306159 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.309295 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.309372 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.310206 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.312174 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.313237 1429316 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0407 12:46:20.314436 1429316 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0407 12:46:20.319103 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.319175 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.319721 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 12:46:20.321916 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.321946 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.322334 1429316 out.go:177]   - Using image docker.io/registry:2.8.3
	I0407 12:46:20.322678 1429316 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:46:20.322713 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0407 12:46:20.322868 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.322897 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.323017 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube252749164 /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:46:20.324672 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0407 12:46:20.324696 1429316 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0407 12:46:20.324702 1429316 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0407 12:46:20.324712 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0407 12:46:20.324836 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1922338861 /etc/kubernetes/addons/yakd-sa.yaml
	I0407 12:46:20.324992 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2274747492 /etc/kubernetes/addons/registry-rc.yaml
	I0407 12:46:20.326202 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.326256 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.326328 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.326347 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.327053 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.327340 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.327365 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.327998 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.329088 1429316 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
	I0407 12:46:20.330035 1429316 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0407 12:46:20.332248 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.332465 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.334867 1429316 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0407 12:46:20.334922 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0407 12:46:20.335101 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1711987977 /etc/kubernetes/addons/deployment.yaml
	I0407 12:46:20.336209 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.336234 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.336268 1429316 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0407 12:46:20.336319 1429316 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 12:46:20.336931 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.336954 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.340717 1429316 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0407 12:46:20.340791 1429316 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0407 12:46:20.340948 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1569781149 /etc/kubernetes/addons/ig-crd.yaml
	I0407 12:46:20.340978 1429316 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.341009 1429316 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0407 12:46:20.341016 1429316 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.341047 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.340768 1429316 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
	I0407 12:46:20.345492 1429316 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
	I0407 12:46:20.345582 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.345760 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0407 12:46:20.345786 1429316 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0407 12:46:20.345907 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2676008935 /etc/kubernetes/addons/yakd-crb.yaml
	I0407 12:46:20.346908 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:46:20.350669 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.350951 1429316 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0407 12:46:20.350997 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
	I0407 12:46:20.352791 1429316 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0407 12:46:20.356470 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0407 12:46:20.356511 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0407 12:46:20.357258 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3825462376 /etc/kubernetes/addons/volcano-deployment.yaml
	I0407 12:46:20.357984 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1913311062 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0407 12:46:20.358967 1429316 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0407 12:46:20.359621 1429316 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0407 12:46:20.359658 1429316 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:46:20.359664 1429316 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0407 12:46:20.359691 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0407 12:46:20.359845 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2952795493 /etc/kubernetes/addons/registry-svc.yaml
	I0407 12:46:20.360495 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube189832616 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:46:20.361524 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 12:46:20.361558 1429316 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 12:46:20.365172 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4120191610 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 12:46:20.374944 1429316 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:46:20.374992 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0407 12:46:20.375186 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2179279931 /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:46:20.379041 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.379374 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.380385 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 12:46:20.380560 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3051500119 /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.385196 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.387870 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0407 12:46:20.388702 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0407 12:46:20.390037 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0407 12:46:20.390067 1429316 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0407 12:46:20.390187 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube31446616 /etc/kubernetes/addons/yakd-svc.yaml
	I0407 12:46:20.390764 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0407 12:46:20.390800 1429316 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0407 12:46:20.391569 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3779181310 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0407 12:46:20.394337 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:46:20.398769 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0407 12:46:20.398806 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0407 12:46:20.398933 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube906689499 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0407 12:46:20.402373 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0407 12:46:20.402640 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.402664 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.405039 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:46:20.408207 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.409282 1429316 addons.go:238] Setting addon default-storageclass=true in "minikube"
	I0407 12:46:20.409335 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.410204 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.411381 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.411413 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.411457 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.416651 1429316 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0407 12:46:20.416753 1429316 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0407 12:46:20.416972 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2826717481 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0407 12:46:20.419552 1429316 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:46:20.419587 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0407 12:46:20.419724 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3788447769 /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:46:20.421654 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 12:46:20.421683 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0407 12:46:20.422435 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1998580135 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 12:46:20.425248 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:46:20.425278 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0407 12:46:20.425416 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3456691057 /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:46:20.470027 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:46:20.471917 1429316 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0407 12:46:20.471958 1429316 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0407 12:46:20.472122 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2976229442 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0407 12:46:20.472656 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0407 12:46:20.472682 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0407 12:46:20.472807 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube422639263 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0407 12:46:20.474651 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:46:20.497912 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 12:46:20.497967 1429316 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 12:46:20.498143 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3106965212 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 12:46:20.514273 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.536535 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0407 12:46:20.536573 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0407 12:46:20.536697 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3748851246 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0407 12:46:20.558030 1429316 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0407 12:46:20.558071 1429316 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0407 12:46:20.558226 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3460275038 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0407 12:46:20.583644 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:46:20.583701 1429316 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 12:46:20.583856 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4292587570 /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:46:20.602264 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:46:20.613494 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0407 12:46:20.613554 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0407 12:46:20.613690 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube210220550 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0407 12:46:20.697780 1429316 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0407 12:46:20.710202 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.710292 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.726957 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0407 12:46:20.727004 1429316 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0407 12:46:20.727156 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1419895819 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0407 12:46:20.758069 1429316 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent" to be "Ready" ...
	I0407 12:46:20.760314 1429316 node_ready.go:49] node "ubuntu-20-agent" has status "Ready":"True"
	I0407 12:46:20.760337 1429316 node_ready.go:38] duration metric: took 2.226937ms for node "ubuntu-20-agent" to be "Ready" ...
	I0407 12:46:20.760348 1429316 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:46:20.776617 1429316 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:46:20.776664 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0407 12:46:20.779959 1429316 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:20.786355 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3871166456 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:46:20.823708 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0407 12:46:20.823745 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0407 12:46:20.823889 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3128803471 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0407 12:46:20.824070 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.824088 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.831089 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.831141 1429316 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.831160 1429316 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0407 12:46:20.831168 1429316 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.831207 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.856878 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0407 12:46:20.856920 1429316 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0407 12:46:20.859857 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube168228944 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0407 12:46:20.883053 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:46:20.886655 1429316 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 12:46:20.886842 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2746251177 /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.916503 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0407 12:46:20.916548 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0407 12:46:20.916700 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4173374182 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0407 12:46:20.925691 1429316 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0407 12:46:20.958076 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.987499 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0407 12:46:20.987568 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0407 12:46:20.987741 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2420038711 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0407 12:46:21.040807 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:46:21.040860 1429316 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0407 12:46:21.041041 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3268416938 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:46:21.136865 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:46:21.409763 1429316 addons.go:479] Verifying addon registry=true in "minikube"
	I0407 12:46:21.412264 1429316 out.go:177] * Verifying registry addon...
	I0407 12:46:21.415321 1429316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0407 12:46:21.418713 1429316 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0407 12:46:21.418736 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:21.433549 1429316 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0407 12:46:21.570206 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.159952141s)
	I0407 12:46:21.648841 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.243753231s)
	I0407 12:46:21.711886 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.24178473s)
	I0407 12:46:21.717694 1429316 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0407 12:46:21.720411 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.118083477s)
	I0407 12:46:21.720456 1429316 addons.go:479] Verifying addon metrics-server=true in "minikube"
	I0407 12:46:21.922074 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:22.419286 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:22.595941 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.712830875s)
	W0407 12:46:22.595992 1429316 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0407 12:46:22.596030 1429316 retry.go:31] will retry after 202.751969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0407 12:46:22.786098 1429316 pod_ready.go:103] pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace has status "Ready":"False"
	I0407 12:46:22.799303 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:46:22.919554 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:23.425450 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:23.456836 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.319887875s)
	I0407 12:46:23.456881 1429316 addons.go:479] Verifying addon csi-hostpath-driver=true in "minikube"
	I0407 12:46:23.462996 1429316 out.go:177] * Verifying csi-hostpath-driver addon...
	I0407 12:46:23.467517 1429316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0407 12:46:23.500910 1429316 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0407 12:46:23.500946 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:23.678635 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.276218032s)
	I0407 12:46:23.919571 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:23.987515 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:24.419253 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:24.471440 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:24.919038 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:24.971484 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:25.285637 1429316 pod_ready.go:93] pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:25.285663 1429316 pod_ready.go:82] duration metric: took 4.505662003s for pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:25.285673 1429316 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-28dsp" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:25.419494 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:25.521115 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:25.533187 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.733792804s)
	I0407 12:46:25.918839 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:25.971084 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:26.419692 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:26.472363 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:26.919941 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:27.020780 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:27.108165 1429316 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0407 12:46:27.108484 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1890754882 /var/lib/minikube/google_application_credentials.json
	I0407 12:46:27.119734 1429316 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0407 12:46:27.119899 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2219183012 /var/lib/minikube/google_cloud_project
	I0407 12:46:27.131325 1429316 addons.go:238] Setting addon gcp-auth=true in "minikube"
	I0407 12:46:27.131402 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:27.132217 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:27.132247 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:27.132286 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:27.152075 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:27.163123 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:27.163212 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:27.172494 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:27.172531 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:27.177380 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:27.177462 1429316 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0407 12:46:27.180770 1429316 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:46:27.182360 1429316 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0407 12:46:27.183717 1429316 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0407 12:46:27.183761 1429316 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0407 12:46:27.183920 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1049495724 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0407 12:46:27.196439 1429316 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0407 12:46:27.196488 1429316 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0407 12:46:27.196686 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube693064940 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0407 12:46:27.206666 1429316 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:46:27.206702 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0407 12:46:27.206855 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube58906347 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:46:27.218711 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:46:27.291476 1429316 pod_ready.go:93] pod "coredns-668d6bf9bc-28dsp" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:27.291502 1429316 pod_ready.go:82] duration metric: took 2.005821765s for pod "coredns-668d6bf9bc-28dsp" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.291519 1429316 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c67zv" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.295922 1429316 pod_ready.go:93] pod "coredns-668d6bf9bc-c67zv" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:27.295949 1429316 pod_ready.go:82] duration metric: took 4.420137ms for pod "coredns-668d6bf9bc-c67zv" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.295962 1429316 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.299925 1429316 pod_ready.go:93] pod "etcd-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:27.299965 1429316 pod_ready.go:82] duration metric: took 3.992923ms for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.299978 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.419975 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:27.471432 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:27.920057 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:27.962665 1429316 addons.go:479] Verifying addon gcp-auth=true in "minikube"
	I0407 12:46:27.965706 1429316 out.go:177] * Verifying gcp-auth addon...
	I0407 12:46:27.968051 1429316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0407 12:46:28.020196 1429316 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0407 12:46:28.020499 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:28.420045 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:28.471286 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:28.805902 1429316 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:28.805928 1429316 pod_ready.go:82] duration metric: took 1.505941321s for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.805938 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.811208 1429316 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:28.811254 1429316 pod_ready.go:82] duration metric: took 5.307688ms for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.811269 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4ktb9" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.889612 1429316 pod_ready.go:93] pod "kube-proxy-4ktb9" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:28.889639 1429316 pod_ready.go:82] duration metric: took 78.35951ms for pod "kube-proxy-4ktb9" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.889652 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.919192 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:29.020417 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:29.289605 1429316 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:29.289637 1429316 pod_ready.go:82] duration metric: took 399.974892ms for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:29.289653 1429316 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:29.419981 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:29.471030 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:29.918490 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:29.971448 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:30.419178 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:30.471563 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:30.918873 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:31.020301 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:31.296406 1429316 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace has status "Ready":"False"
	I0407 12:46:31.419473 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:31.471850 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:31.919476 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:31.971849 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:32.419096 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:32.471663 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:32.919835 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:32.971160 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:33.419000 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:33.519607 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:33.794578 1429316 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace has status "Ready":"False"
	I0407 12:46:33.918521 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:33.989387 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:34.419833 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:34.470704 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:34.919739 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:35.020689 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:35.295351 1429316 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:35.295382 1429316 pod_ready.go:82] duration metric: took 6.005719807s for pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:35.295394 1429316 pod_ready.go:39] duration metric: took 14.53503087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:46:35.295421 1429316 api_server.go:52] waiting for apiserver process to appear ...
	I0407 12:46:35.295487 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:35.314133 1429316 api_server.go:72] duration metric: took 15.109039821s to wait for apiserver process to appear ...
	I0407 12:46:35.314163 1429316 api_server.go:88] waiting for apiserver healthz status ...
	I0407 12:46:35.314188 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:35.317933 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:35.318854 1429316 api_server.go:141] control plane version: v1.32.2
	I0407 12:46:35.318881 1429316 api_server.go:131] duration metric: took 4.708338ms to wait for apiserver health ...
	I0407 12:46:35.318889 1429316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 12:46:35.322611 1429316 system_pods.go:59] 17 kube-system pods found
	I0407 12:46:35.322656 1429316 system_pods.go:61] "amd-gpu-device-plugin-86df5" [ba9ab47c-61f0-4711-959e-29c976ef7c89] Running
	I0407 12:46:35.322666 1429316 system_pods.go:61] "coredns-668d6bf9bc-28dsp" [c3edd2f1-75f3-4345-9544-93c2a6f0f5d3] Running
	I0407 12:46:35.322677 1429316 system_pods.go:61] "csi-hostpath-attacher-0" [8f7840f4-1626-4a29-be20-6998152854a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:46:35.322690 1429316 system_pods.go:61] "csi-hostpath-resizer-0" [06f1b8f1-d561-44df-8d0e-e5191281a47f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0407 12:46:35.322700 1429316 system_pods.go:61] "csi-hostpathplugin-n7jq8" [7f9c7966-52c5-4bcb-84c7-1915efadd81b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:46:35.322708 1429316 system_pods.go:61] "etcd-ubuntu-20-agent" [13ea58ff-509e-403d-90ae-292ab15ea901] Running
	I0407 12:46:35.322712 1429316 system_pods.go:61] "kube-apiserver-ubuntu-20-agent" [8832ae71-7c9c-4d9e-a74d-d2dc87fcc0a1] Running
	I0407 12:46:35.322718 1429316 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent" [73ba7bcb-e73b-4403-a7d7-9532589d0ab9] Running
	I0407 12:46:35.322723 1429316 system_pods.go:61] "kube-proxy-4ktb9" [f218d86a-31ef-4897-b9e4-d53c0a6eb365] Running
	I0407 12:46:35.322728 1429316 system_pods.go:61] "kube-scheduler-ubuntu-20-agent" [58f3fb78-0ec4-41c5-a20f-9a0df3c2f9ce] Running
	I0407 12:46:35.322741 1429316 system_pods.go:61] "metrics-server-7fbb699795-kfmft" [723d2ed5-e3cb-4cc3-80d7-62e3c337502a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 12:46:35.322746 1429316 system_pods.go:61] "nvidia-device-plugin-daemonset-qtjqk" [861c99d3-8db6-4690-9b9a-9445eb29a1b1] Running
	I0407 12:46:35.322754 1429316 system_pods.go:61] "registry-6c88467877-kwnrb" [4fbcb06c-10f2-48eb-ae63-5c09b49e6099] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0407 12:46:35.322762 1429316 system_pods.go:61] "registry-proxy-gpv45" [1ee0f741-4f8b-4063-832c-bfc311b610aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0407 12:46:35.322772 1429316 system_pods.go:61] "snapshot-controller-68b874b76f-7465t" [bacd4eea-22af-4b2e-a3c3-c11adcd9d06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:46:35.322782 1429316 system_pods.go:61] "snapshot-controller-68b874b76f-bnf6p" [36a09b5c-f06d-41d9-b331-82f98e9152c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:46:35.322787 1429316 system_pods.go:61] "storage-provisioner" [18b8d7ec-1526-45c5-8660-6ab5bcb5dde2] Running
	I0407 12:46:35.322795 1429316 system_pods.go:74] duration metric: took 3.900184ms to wait for pod list to return data ...
	I0407 12:46:35.322803 1429316 default_sa.go:34] waiting for default service account to be created ...
	I0407 12:46:35.325143 1429316 default_sa.go:45] found service account: "default"
	I0407 12:46:35.325165 1429316 default_sa.go:55] duration metric: took 2.356952ms for default service account to be created ...
	I0407 12:46:35.325173 1429316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 12:46:35.328166 1429316 system_pods.go:86] 17 kube-system pods found
	I0407 12:46:35.328197 1429316 system_pods.go:89] "amd-gpu-device-plugin-86df5" [ba9ab47c-61f0-4711-959e-29c976ef7c89] Running
	I0407 12:46:35.328204 1429316 system_pods.go:89] "coredns-668d6bf9bc-28dsp" [c3edd2f1-75f3-4345-9544-93c2a6f0f5d3] Running
	I0407 12:46:35.328211 1429316 system_pods.go:89] "csi-hostpath-attacher-0" [8f7840f4-1626-4a29-be20-6998152854a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:46:35.328218 1429316 system_pods.go:89] "csi-hostpath-resizer-0" [06f1b8f1-d561-44df-8d0e-e5191281a47f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0407 12:46:35.328232 1429316 system_pods.go:89] "csi-hostpathplugin-n7jq8" [7f9c7966-52c5-4bcb-84c7-1915efadd81b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:46:35.328239 1429316 system_pods.go:89] "etcd-ubuntu-20-agent" [13ea58ff-509e-403d-90ae-292ab15ea901] Running
	I0407 12:46:35.328243 1429316 system_pods.go:89] "kube-apiserver-ubuntu-20-agent" [8832ae71-7c9c-4d9e-a74d-d2dc87fcc0a1] Running
	I0407 12:46:35.328248 1429316 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent" [73ba7bcb-e73b-4403-a7d7-9532589d0ab9] Running
	I0407 12:46:35.328251 1429316 system_pods.go:89] "kube-proxy-4ktb9" [f218d86a-31ef-4897-b9e4-d53c0a6eb365] Running
	I0407 12:46:35.328262 1429316 system_pods.go:89] "kube-scheduler-ubuntu-20-agent" [58f3fb78-0ec4-41c5-a20f-9a0df3c2f9ce] Running
	I0407 12:46:35.328271 1429316 system_pods.go:89] "metrics-server-7fbb699795-kfmft" [723d2ed5-e3cb-4cc3-80d7-62e3c337502a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 12:46:35.328275 1429316 system_pods.go:89] "nvidia-device-plugin-daemonset-qtjqk" [861c99d3-8db6-4690-9b9a-9445eb29a1b1] Running
	I0407 12:46:35.328280 1429316 system_pods.go:89] "registry-6c88467877-kwnrb" [4fbcb06c-10f2-48eb-ae63-5c09b49e6099] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0407 12:46:35.328289 1429316 system_pods.go:89] "registry-proxy-gpv45" [1ee0f741-4f8b-4063-832c-bfc311b610aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0407 12:46:35.328300 1429316 system_pods.go:89] "snapshot-controller-68b874b76f-7465t" [bacd4eea-22af-4b2e-a3c3-c11adcd9d06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:46:35.328315 1429316 system_pods.go:89] "snapshot-controller-68b874b76f-bnf6p" [36a09b5c-f06d-41d9-b331-82f98e9152c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:46:35.328320 1429316 system_pods.go:89] "storage-provisioner" [18b8d7ec-1526-45c5-8660-6ab5bcb5dde2] Running
	I0407 12:46:35.328331 1429316 system_pods.go:126] duration metric: took 3.151221ms to wait for k8s-apps to be running ...
	I0407 12:46:35.328339 1429316 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 12:46:35.328391 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:46:35.342621 1429316 system_svc.go:56] duration metric: took 14.266686ms WaitForService to wait for kubelet
	I0407 12:46:35.342652 1429316 kubeadm.go:582] duration metric: took 15.137567518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:46:35.342672 1429316 node_conditions.go:102] verifying NodePressure condition ...
	I0407 12:46:35.345647 1429316 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0407 12:46:35.345689 1429316 node_conditions.go:123] node cpu capacity is 8
	I0407 12:46:35.345708 1429316 node_conditions.go:105] duration metric: took 3.029456ms to run NodePressure ...
	I0407 12:46:35.345725 1429316 start.go:241] waiting for startup goroutines ...
	I0407 12:46:35.418575 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:35.471738 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:35.919460 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:35.971459 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:36.418927 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:36.470944 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:36.920012 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:36.971236 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:37.419625 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:37.471551 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:37.919187 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:37.971281 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:38.419700 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:38.471414 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:38.919826 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:38.971034 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:39.419257 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:39.471577 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:39.919763 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:39.970822 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:40.419580 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:40.471764 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:40.919389 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:40.971543 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:41.418325 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:41.471154 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:41.919369 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:41.971517 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:42.419213 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:42.471390 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:42.919024 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:43.020384 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:43.419813 1429316 kapi.go:107] duration metric: took 22.004486403s to wait for kubernetes.io/minikube-addons=registry ...
	I0407 12:46:43.471031 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:43.972893 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:44.472004 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:44.971721 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:45.471738 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:45.972198 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:46.472443 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:46.972278 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:47.483667 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:47.971419 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:48.472169 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:48.976645 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:49.471072 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:49.971622 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:50.471297 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:50.972415 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:51.471308 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:51.972434 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:52.471555 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:52.975728 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:53.471488 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:53.971405 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:54.471915 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:54.972725 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:55.471662 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:56.020761 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:56.471703 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:56.972347 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:57.471091 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:57.972508 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:58.471079 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:58.972451 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:59.471337 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:59.972044 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:00.471100 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:00.972307 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:01.472123 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:01.972205 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:02.472657 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:02.972119 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:03.517910 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:03.972052 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:04.472123 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:04.972034 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:05.471642 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:05.971701 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:06.471445 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:06.971897 1429316 kapi.go:107] duration metric: took 43.504396595s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0407 12:47:49.972271 1429316 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0407 12:47:49.972299 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:50.471070 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:50.971560 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:51.472444 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:51.971704 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:52.472395 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:52.977847 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:53.471523 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:53.972000 1429316 kapi.go:107] duration metric: took 1m26.003943819s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0407 12:47:53.973797 1429316 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0407 12:47:53.975209 1429316 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0407 12:47:53.976604 1429316 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0407 12:47:53.978619 1429316 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, inspektor-gadget, yakd, metrics-server, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0407 12:47:53.980134 1429316 addons.go:514] duration metric: took 1m33.781240974s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass inspektor-gadget yakd metrics-server volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0407 12:47:53.980187 1429316 start.go:246] waiting for cluster config update ...
	I0407 12:47:53.980213 1429316 start.go:255] writing updated cluster config ...
	I0407 12:47:53.980556 1429316 exec_runner.go:51] Run: rm -f paused
	I0407 12:47:54.030053 1429316 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 12:47:54.031911 1429316 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2025-02-07 00:17:37 UTC, end at Mon 2025-04-07 12:53:54 UTC. --
	Apr 07 12:47:36 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:36.759946538Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:47:43 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:43.232636667Z" level=warning msg="reference for unknown type: " digest="sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86" remote="docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
	Apr 07 12:47:43 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:43.744910893Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:47:43 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:43.747018673Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:47:50 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:47:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1e7d13c91330dcf53cae5ee7728e5b9a824e936c78a34d13fd9b3b31cde6e35a/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Apr 07 12:47:50 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:47:50.493780846Z" level=warning msg="reference for unknown type: " digest="sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Apr 07 12:47:52 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:47:52Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Apr 07 12:48:18 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:18.226947760Z" level=warning msg="reference for unknown type: " digest="sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3" remote="docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
	Apr 07 12:48:18 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:18.741344775Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:48:18 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:18.743579696Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:48:25 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:25.226866414Z" level=warning msg="reference for unknown type: " digest="sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86" remote="docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
	Apr 07 12:48:25 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:25.737717502Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:48:25 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:48:25.739857217Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:49:41 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:41.236422558Z" level=warning msg="reference for unknown type: " digest="sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3" remote="docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
	Apr 07 12:49:42 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:42.056850510Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:49:42 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:49:42Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3: docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3: Pulling from volcanosh/vc-controller-manager"
	Apr 07 12:49:55 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:55.226083604Z" level=warning msg="reference for unknown type: " digest="sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86" remote="docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
	Apr 07 12:49:55 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:55.738260649Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:49:55 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:49:55.740361300Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:52:32 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:52:32.229990470Z" level=warning msg="reference for unknown type: " digest="sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3" remote="docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
	Apr 07 12:52:33 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:52:33.045353202Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:52:33 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:52:33Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3: docker.io/volcanosh/vc-controller-manager@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3: Pulling from volcanosh/vc-controller-manager"
	Apr 07 12:52:42 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:52:42.226132974Z" level=warning msg="reference for unknown type: " digest="sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86" remote="docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
	Apr 07 12:52:43 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:52:43.041209701Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:52:43 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:52:43Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86: docker.io/volcanosh/vc-scheduler@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86: Pulling from volcanosh/vc-scheduler"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	43612f6a057cd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7                                 6 minutes ago       Running             gcp-auth                                 0                   1e7d13c91330d       gcp-auth-cd9db85c-jmrjf
	10fae591b8f52       volcanosh/vc-webhook-manager@sha256:2ceea91a5f05a366955f20cb1ab266b4732f906a205cb2e3f5930cf93335aeee                                         6 minutes ago       Running             admission                                0                   1bf5b4675c5d6       volcano-admission-75d8f6b5c-pldpl
	d8d4df3245c1b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	9e774f36f36c9       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          6 minutes ago       Running             csi-provisioner                          0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	14093b9eed3cd       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            6 minutes ago       Running             liveness-probe                           0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	84f7a19f6f36c       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           6 minutes ago       Running             hostpath                                 0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	4fa315740091f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                6 minutes ago       Running             node-driver-registrar                    0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	647294f13c314       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              6 minutes ago       Running             csi-resizer                              0                   050e14ae928f5       csi-hostpath-resizer-0
	ba7ce3888e0c5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   6 minutes ago       Running             csi-external-health-monitor-controller   0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	2b2cd10e8243c       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             6 minutes ago       Running             csi-attacher                             0                   ddc86519dee5d       csi-hostpath-attacher-0
	fd738334ee3ba       volcanosh/vc-webhook-manager@sha256:2ceea91a5f05a366955f20cb1ab266b4732f906a205cb2e3f5930cf93335aeee                                         7 minutes ago       Exited              main                                     0                   f1d6f87bba138       volcano-admission-init-4bqwh
	a247be89a521f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      7 minutes ago       Running             volume-snapshot-controller               0                   10f25a02c6c25       snapshot-controller-68b874b76f-7465t
	6320ac7c7873b       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      7 minutes ago       Running             volume-snapshot-controller               0                   cee952ace26d6       snapshot-controller-68b874b76f-bnf6p
	fdd971918b4ba       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:886412e63d6c580c50b3b7b59eee709a870768a7b5d0d9c27d66fe2a32c555e0                            7 minutes ago       Running             gadget                                   0                   87e95256e8189       gadget-qfz76
	3cde9dbb13733       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        7 minutes ago       Running             yakd                                     0                   17c37c1766f9a       yakd-dashboard-575dd5996b-qf5qb
	a5658dd8aadd5       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        7 minutes ago       Running             metrics-server                           0                   4e9eb194ca166       metrics-server-7fbb699795-kfmft
	ae8585906fcb9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:60ab3508367ad093b4b891231572577371a29f838d61e64d7f7d093d961c862c                              7 minutes ago       Running             registry-proxy                           0                   2d13057c5cdc3       registry-proxy-gpv45
	3a2cbb8e4e131       registry@sha256:319881be2ee9e345d5837d15842a04268de6a139e23be42654fc7664fc6eaf52                                                             7 minutes ago       Running             registry                                 0                   8817996a24643       registry-6c88467877-kwnrb
	b7c45376b2746       gcr.io/cloud-spanner-emulator/emulator@sha256:a9c7274e55bba48a4f5bec813a11087d9f2e3a3f7e583dae9873aae2ec17f125                               7 minutes ago       Running             cloud-spanner-emulator                   0                   96406b22e6497       cloud-spanner-emulator-cc9755fc7-8d2gd
	08a692aaf85f6       nvcr.io/nvidia/k8s-device-plugin@sha256:7089559ce6153018806857f5049085bae15b3bf6f1c8bd19d8b12f707d087dea                                     7 minutes ago       Running             nvidia-device-plugin-ctr                 0                   158136d890242       nvidia-device-plugin-daemonset-qtjqk
	28e171950f5a7       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               7 minutes ago       Running             amd-gpu-device-plugin                    0                   0cccbb3588406       amd-gpu-device-plugin-86df5
	9367d6480bcd3       6e38f40d628db                                                                                                                                7 minutes ago       Running             storage-provisioner                      0                   4e46329d24f22       storage-provisioner
	e6de974948a2b       f1332858868e1                                                                                                                                7 minutes ago       Running             kube-proxy                               0                   7cbe52af79cd0       kube-proxy-4ktb9
	634b0f31bf167       c69fa2e9cbf5f                                                                                                                                7 minutes ago       Running             coredns                                  0                   fb409e8883373       coredns-668d6bf9bc-28dsp
	8e962b9f09173       d8e673e7c9983                                                                                                                                7 minutes ago       Running             kube-scheduler                           0                   0cc01a4584319       kube-scheduler-ubuntu-20-agent
	1b21328ae243e       85b7a174738ba                                                                                                                                7 minutes ago       Running             kube-apiserver                           0                   3da9550e5056a       kube-apiserver-ubuntu-20-agent
	e23f65eeb6aff       a9e7e6b294baf                                                                                                                                7 minutes ago       Running             etcd                                     0                   016f56a70aaee       etcd-ubuntu-20-agent
	953db0d2f82d9       b6a454c5a800d                                                                                                                                7 minutes ago       Running             kube-controller-manager                  0                   149fe9b8110db       kube-controller-manager-ubuntu-20-agent
	
	
	==> coredns [634b0f31bf16] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 876af57068f747144f204884e843f6792435faec005aab1f10bd81e6ffca54e010e4374994d8f544c4f6711272ab5662d0892980e63ccc3ba8ba9e3fbcc5e4d9
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43165 - 33942 "HINFO IN 432949529890596107.8050361272252031817. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021642899s
	[INFO] 10.244.0.24:33042 - 27922 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000389816s
	[INFO] 10.244.0.24:42171 - 38582 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184446s
	[INFO] 10.244.0.24:56803 - 17108 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118517s
	[INFO] 10.244.0.24:36839 - 60695 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000170938s
	[INFO] 10.244.0.24:48923 - 36870 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128602s
	[INFO] 10.244.0.24:43224 - 14793 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000199417s
	[INFO] 10.244.0.24:40445 - 11974 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00413958s
	[INFO] 10.244.0.24:38595 - 36532 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004195152s
	[INFO] 10.244.0.24:33576 - 36108 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003551961s
	[INFO] 10.244.0.24:44447 - 31922 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004805135s
	[INFO] 10.244.0.24:42741 - 32070 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003188282s
	[INFO] 10.244.0.24:35696 - 46424 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00369519s
	[INFO] 10.244.0.24:40570 - 13844 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002311578s
	[INFO] 10.244.0.24:45311 - 54943 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.002645389s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_46_15_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:46:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 12:53:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 12:51:50 +0000   Mon, 07 Apr 2025 12:46:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 12:51:50 +0000   Mon, 07 Apr 2025 12:46:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 12:51:50 +0000   Mon, 07 Apr 2025 12:46:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 12:51:50 +0000   Mon, 07 Apr 2025 12:46:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.132.0.4
	  Hostname:    ubuntu-20-agent
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                591c9f12-2938-3743-e2bf-c56a050d43d1
	  Boot ID:                    32c262e1-f080-4c3c-9cad-9adf7e4991ef
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.0.4
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-cc9755fc7-8d2gd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  gadget                      gadget-qfz76                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  gcp-auth                    gcp-auth-cd9db85c-jmrjf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 amd-gpu-device-plugin-86df5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 coredns-668d6bf9bc-28dsp                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m35s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 csi-hostpathplugin-n7jq8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 etcd-ubuntu-20-agent                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m42s
	  kube-system                 kube-apiserver-ubuntu-20-agent             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-controller-manager-ubuntu-20-agent    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-proxy-4ktb9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-scheduler-ubuntu-20-agent             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 metrics-server-7fbb699795-kfmft            100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m34s
	  kube-system                 nvidia-device-plugin-daemonset-qtjqk       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 registry-6c88467877-kwnrb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 registry-proxy-gpv45                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 snapshot-controller-68b874b76f-7465t       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 snapshot-controller-68b874b76f-bnf6p       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  volcano-system              volcano-admission-75d8f6b5c-pldpl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  volcano-system              volcano-controllers-86bdc5c9c-7srdg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  volcano-system              volcano-scheduler-75fdd99bcf-kkrdq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  yakd-dashboard              yakd-dashboard-575dd5996b-qf5qb            0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     7m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m33s                  kube-proxy       
	  Normal   Starting                 7m47s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  7m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    7m46s (x8 over 7m46s)  kubelet          Node ubuntu-20-agent status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m46s (x7 over 7m46s)  kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  7m46s (x8 over 7m46s)  kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientMemory
	  Normal   Starting                 7m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  7m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m40s                  kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m40s                  kubelet          Node ubuntu-20-agent status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m40s                  kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m37s                  node-controller  Node ubuntu-20-agent event: Registered Node ubuntu-20-agent in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 83 90 10 44 0e 08 06
	[  +9.877557] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 9f 53 98 65 e0 08 06
	[  +0.046422] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 6c 53 68 81 1f 08 06
	[  +0.061060] IPv4: martian source 10.244.0.1 from 10.244.0.11, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0a 86 62 be 76 08 06
	[  +3.198561] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 81 f4 b0 2d e3 08 06
	[Apr 7 12:47] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e 41 17 ce 62 b6 08 06
	[  +0.558988] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 48 74 4f d6 2f 08 06
	[  +0.109195] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 a6 01 38 b3 2f 08 06
	[ +23.480927] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 4e a2 ba 28 37 08 06
	[  +5.548580] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 6e 70 68 84 64 08 06
	[  +0.026445] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 8a 42 e0 9b 75 08 06
	[ +19.909024] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 36 06 3b 6a b8 08 06
	[  +0.000577] IPv4: martian source 10.244.0.24 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 07 5c 69 9a cd 08 06
	
	
	==> etcd [e23f65eeb6af] <==
	{"level":"info","ts":"2025-04-07T12:46:10.809584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-07T12:46:10.809634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-07T12:46:10.809666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 received MsgPreVoteResp from d3d995060bc0a086 at term 1"}
	{"level":"info","ts":"2025-04-07T12:46:10.809682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became candidate at term 2"}
	{"level":"info","ts":"2025-04-07T12:46:10.809692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 received MsgVoteResp from d3d995060bc0a086 at term 2"}
	{"level":"info","ts":"2025-04-07T12:46:10.809700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became leader at term 2"}
	{"level":"info","ts":"2025-04-07T12:46:10.809709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3d995060bc0a086 elected leader d3d995060bc0a086 at term 2"}
	{"level":"info","ts":"2025-04-07T12:46:10.810586Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"d3d995060bc0a086","local-member-attributes":"{Name:ubuntu-20-agent ClientURLs:[https://10.132.0.4:2379]}","request-path":"/0/members/d3d995060bc0a086/attributes","cluster-id":"36fd114adae62b7a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T12:46:10.810757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:46:10.810736Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:46:10.810857Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T12:46:10.810931Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T12:46:10.810645Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:46:10.811710Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:46:10.811768Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:46:10.811988Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"36fd114adae62b7a","local-member-id":"d3d995060bc0a086","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:46:10.812087Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:46:10.812120Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:46:10.812616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.132.0.4:2379"}
	{"level":"info","ts":"2025-04-07T12:46:10.812671Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T12:46:27.716881Z","caller":"traceutil/trace.go:171","msg":"trace[1770517557] linearizableReadLoop","detail":"{readStateIndex:875; appliedIndex:873; }","duration":"121.221478ms","start":"2025-04-07T12:46:27.595638Z","end":"2025-04-07T12:46:27.716859Z","steps":["trace[1770517557] 'read index received'  (duration: 58.788992ms)","trace[1770517557] 'applied index is now lower than readState.Index'  (duration: 62.431839ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T12:46:27.717075Z","caller":"traceutil/trace.go:171","msg":"trace[1614856425] transaction","detail":"{read_only:false; response_revision:855; number_of_response:1; }","duration":"123.047449ms","start":"2025-04-07T12:46:27.594011Z","end":"2025-04-07T12:46:27.717058Z","steps":["trace[1614856425] 'process raft request'  (duration: 60.314585ms)","trace[1614856425] 'compare'  (duration: 62.231306ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:46:27.717164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.503421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" limit:1 ","response":"range_response_count:1 size:716"}
	{"level":"info","ts":"2025-04-07T12:46:27.717216Z","caller":"traceutil/trace.go:171","msg":"trace[1260179640] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:856; }","duration":"121.596627ms","start":"2025-04-07T12:46:27.595610Z","end":"2025-04-07T12:46:27.717207Z","steps":["trace[1260179640] 'agreement among raft nodes before linearized reading'  (duration: 121.409977ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:46:27.717377Z","caller":"traceutil/trace.go:171","msg":"trace[2146819799] transaction","detail":"{read_only:false; response_revision:856; number_of_response:1; }","duration":"123.358599ms","start":"2025-04-07T12:46:27.594010Z","end":"2025-04-07T12:46:27.717368Z","steps":["trace[2146819799] 'process raft request'  (duration: 122.792111ms)"],"step_count":1}
	
	
	==> gcp-auth [43612f6a057c] <==
	2025/04/07 12:47:53 GCP Auth Webhook started!
	
	
	==> kernel <==
	 12:53:55 up  4:36,  0 users,  load average: 0.43, 0.81, 1.47
	Linux ubuntu-20-agent 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [1b21328ae243] <==
	E0407 12:46:45.376383       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0407 12:46:45.376403       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.94.227:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.94.227:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.94.227:443: connect: connection refused" logger="UnhandledError"
	E0407 12:46:45.377915       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.94.227:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.94.227:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.94.227:443: connect: connection refused" logger="UnhandledError"
	I0407 12:46:45.410925       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0407 12:46:48.502032       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
	E0407 12:46:48.502074       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	W0407 12:46:48.504222       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.232.48:443: connect: connection refused
	W0407 12:46:58.973672       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
	E0407 12:46:58.973729       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	W0407 12:46:58.975524       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.232.48:443: connect: connection refused
	W0407 12:46:58.987339       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
	E0407 12:46:58.987397       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	W0407 12:46:58.989094       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.232.48:443: connect: connection refused
	W0407 12:47:08.989853       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
	E0407 12:47:08.989904       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	W0407 12:47:08.992543       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.232.48:443: connect: connection refused
	W0407 12:47:30.984716       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
	E0407 12:47:30.984768       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	W0407 12:47:30.996198       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
	E0407 12:47:30.996239       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	W0407 12:47:49.957919       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
	E0407 12:47:49.957967       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [953db0d2f82d] <==
	I0407 12:47:49.988564       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="14.740381ms"
	I0407 12:47:49.988694       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="80.775µs"
	I0407 12:47:49.996117       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="66.867µs"
	I0407 12:47:51.978096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="75.82µs"
	I0407 12:47:53.562541       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="6.899009ms"
	I0407 12:47:53.562657       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-cd9db85c" duration="64.49µs"
	I0407 12:47:58.981977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="71.07µs"
	I0407 12:48:02.979460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="85.022µs"
	I0407 12:48:05.044424       1 job_controller.go:598] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0407 12:48:06.029164       1 job_controller.go:598] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0407 12:48:11.980313       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="83.377µs"
	I0407 12:48:17.455623       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent"
	I0407 12:48:30.980787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="67.784µs"
	I0407 12:48:36.979786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="78.169µs"
	I0407 12:48:42.980616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="71.78µs"
	I0407 12:48:50.981161       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="84.638µs"
	I0407 12:49:53.978345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="77.788µs"
	I0407 12:50:05.979261       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="70.424µs"
	I0407 12:50:06.980686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="91.043µs"
	I0407 12:50:19.979991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="71.314µs"
	I0407 12:51:50.892283       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent"
	I0407 12:52:44.980227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="73.593µs"
	I0407 12:52:55.978449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="73.473µs"
	I0407 12:52:56.980203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-86bdc5c9c" duration="78.283µs"
	I0407 12:53:08.980954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-75fdd99bcf" duration="267.32µs"
	
	
	==> kube-proxy [e6de974948a2] <==
	I0407 12:46:21.832215       1 server_linux.go:66] "Using iptables proxy"
	I0407 12:46:22.002444       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["10.132.0.4"]
	E0407 12:46:22.002521       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:46:22.084578       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0407 12:46:22.084642       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:46:22.090930       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:46:22.091456       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:46:22.091487       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:46:22.104770       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:46:22.104822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:46:22.104856       1 config.go:199] "Starting service config controller"
	I0407 12:46:22.104861       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:46:22.105247       1 config.go:329] "Starting node config controller"
	I0407 12:46:22.105262       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:46:22.207396       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:46:22.207478       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 12:46:22.211383       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e962b9f0917] <==
	W0407 12:46:12.396702       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:46:12.396720       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.243091       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 12:46:13.243142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.305117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 12:46:13.305161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.312894       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 12:46:13.312941       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.314239       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 12:46:13.314279       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.357817       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 12:46:13.357865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.450908       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0407 12:46:13.450956       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.517730       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:46:13.517783       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 12:46:13.524289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0407 12:46:13.524338       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.554955       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:46:13.554999       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.556960       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:46:13.556999       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.658851       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 12:46:13.658899       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0407 12:46:15.991211       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2025-02-07 00:17:37 UTC, end at Mon 2025-04-07 12:53:55 UTC. --
	Apr 07 12:51:52 ubuntu-20-agent kubelet[1430849]: E0407 12:51:52.969124 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	Apr 07 12:51:59 ubuntu-20-agent kubelet[1430849]: E0407 12:51:59.969293 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
	Apr 07 12:52:05 ubuntu-20-agent kubelet[1430849]: E0407 12:52:05.968601 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	Apr 07 12:52:14 ubuntu-20-agent kubelet[1430849]: E0407 12:52:14.969591 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
	Apr 07 12:52:16 ubuntu-20-agent kubelet[1430849]: E0407 12:52:16.969385 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	Apr 07 12:52:26 ubuntu-20-agent kubelet[1430849]: E0407 12:52:26.969063 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
	Apr 07 12:52:33 ubuntu-20-agent kubelet[1430849]: E0407 12:52:33.047650 1430849 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
	Apr 07 12:52:33 ubuntu-20-agent kubelet[1430849]: E0407 12:52:33.047726 1430849 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3"
	Apr 07 12:52:33 ubuntu-20-agent kubelet[1430849]: E0407 12:52:33.047876 1430849 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:volcano-controllers,Image:docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3,Command:[],Args:[--logtostderr --enable-healthz=true --enable-metrics=true --leader-elect=false --kube-api-qps=50 --kube-api-burst=100 --worker-threads=3 --worker-threads-for-gc=5 --worker-threads-for-podgroup=5 -v=4 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp8x5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-controllers-86bdc5c9c-7srdg_volcano-system(cd2b3c58-47c5-46f8-ba36-579e70ff12c3): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 12:52:33 ubuntu-20-agent kubelet[1430849]: E0407 12:52:33.049056 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	Apr 07 12:52:43 ubuntu-20-agent kubelet[1430849]: E0407 12:52:43.043523 1430849 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
	Apr 07 12:52:43 ubuntu-20-agent kubelet[1430849]: E0407 12:52:43.043586 1430849 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86"
	Apr 07 12:52:43 ubuntu-20-agent kubelet[1430849]: E0407 12:52:43.043701 1430849 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:volcano-scheduler,Image:docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86,Command:[],Args:[--logtostderr --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf --enable-healthz=true --enable-metrics=true --leader-elect=false --kube-api-qps=2000 --kube-api-burst=2000 --schedule-period=1s --node-worker-threads=20 -v=3 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEBUG_SOCKET_DIR,Value:/tmp/klog-socks,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scheduler-config,ReadOnly:false,MountPath:/volcano.scheduler,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:klog-sock,ReadOnly:false,MountPath:/tmp/klog-socks,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbqtk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-scheduler-75fdd99bcf-kkrdq_volcano-system(eca17150-2673-4431-a0cc-079a7c574525): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 12:52:43 ubuntu-20-agent kubelet[1430849]: E0407 12:52:43.044955 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
	Apr 07 12:52:44 ubuntu-20-agent kubelet[1430849]: E0407 12:52:44.969445 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	Apr 07 12:52:55 ubuntu-20-agent kubelet[1430849]: E0407 12:52:55.968467 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
	Apr 07 12:52:56 ubuntu-20-agent kubelet[1430849]: E0407 12:52:56.968305 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	Apr 07 12:53:08 ubuntu-20-agent kubelet[1430849]: E0407 12:53:08.969002 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
	Apr 07 12:53:11 ubuntu-20-agent kubelet[1430849]: E0407 12:53:11.968602 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	Apr 07 12:53:19 ubuntu-20-agent kubelet[1430849]: E0407 12:53:19.968727 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
	Apr 07 12:53:24 ubuntu-20-agent kubelet[1430849]: E0407 12:53:24.970354 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	Apr 07 12:53:31 ubuntu-20-agent kubelet[1430849]: E0407 12:53:31.969238 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
	Apr 07 12:53:38 ubuntu-20-agent kubelet[1430849]: E0407 12:53:38.978696 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	Apr 07 12:53:44 ubuntu-20-agent kubelet[1430849]: E0407 12:53:44.969172 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.11.0@sha256:5cfdfe4343ed267002262f1bb056a7b191cead04003016490cade1e14cfdad86\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-75fdd99bcf-kkrdq" podUID="eca17150-2673-4431-a0cc-079a7c574525"
	Apr 07 12:53:51 ubuntu-20-agent kubelet[1430849]: E0407 12:53:51.968448 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.11.0@sha256:4ebe173752c86bd4a81d5514e9ba56f62dac79d081042a9069333f9aae32d8a3\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-86bdc5c9c-7srdg" podUID="cd2b3c58-47c5-46f8-ba36-579e70ff12c3"
	
	
	==> storage-provisioner [9367d6480bcd] <==
	I0407 12:46:22.691767       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:46:22.700993       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:46:22.701760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:46:22.709645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:46:22.709901       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_d2ced8c7-5bce-4be8-ab28-23171422388c!
	I0407 12:46:22.710495       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"088063e6-27ee-4b45-98d2-8cc5af467fa3", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent_d2ced8c7-5bce-4be8-ab28-23171422388c became leader
	I0407 12:46:22.810972       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_d2ced8c7-5bce-4be8-ab28-23171422388c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: volcano-admission-init-4bqwh volcano-controllers-86bdc5c9c-7srdg volcano-scheduler-75fdd99bcf-kkrdq
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod volcano-admission-init-4bqwh volcano-controllers-86bdc5c9c-7srdg volcano-scheduler-75fdd99bcf-kkrdq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod volcano-admission-init-4bqwh volcano-controllers-86bdc5c9c-7srdg volcano-scheduler-75fdd99bcf-kkrdq: exit status 1 (63.576092ms)

** stderr ** 
	Error from server (NotFound): pods "volcano-admission-init-4bqwh" not found
	Error from server (NotFound): pods "volcano-controllers-86bdc5c9c-7srdg" not found
	Error from server (NotFound): pods "volcano-scheduler-75fdd99bcf-kkrdq" not found

** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod volcano-admission-init-4bqwh volcano-controllers-86bdc5c9c-7srdg volcano-scheduler-75fdd99bcf-kkrdq: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.935602828s)
--- FAIL: TestAddons/serial/Volcano (372.74s)
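
Editor's note: every pull failure in the logs above carries the same root cause, Docker Hub's anonymous pull rate limit (`toomanyrequests`). As a hedged aside, the remaining anonymous quota can be checked from the affected agent via Docker Hub's documented rate-limit preview repository (`ratelimitpreview/test` is Docker's public probe image, not part of this test suite; assumes `curl` and `jq` are installed):

```shell
# Fetch an anonymous token for the rate-limit preview repository,
# then read the RateLimit-* headers from a HEAD request on its manifest.
TOKEN=$(curl -fsS "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -fsS --head -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
  | grep -i '^ratelimit'
```

A `ratelimit-remaining: 0;w=21600` header would confirm the quota exhaustion seen here; authenticating the pulls or mirroring the images would avoid it.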

TestAddons/parallel/CSI (388.72s)

=== RUN   TestAddons/parallel/CSI
I0407 12:54:56.250703 1425516 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:54:56.253820 1425516 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:54:56.253843 1425516 kapi.go:107] duration metric: took 3.162725ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.170785ms
addons_test.go:491: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default (x21)
addons_test.go:501: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2eb63804-3289-4376-94f2-e061287276c0] Pending
helpers_test.go:344: "task-pv-pod" [2eb63804-3289-4376-94f2-e061287276c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:506: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:506: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:506: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-04-07 13:01:16.718017897 +0000 UTC m=+984.500038263
addons_test.go:506: (dbg) Run:  kubectl --context minikube describe po task-pv-pod -n default
addons_test.go:506: (dbg) kubectl --context minikube describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubuntu-20-agent/10.132.0.4
Start Time:       Mon, 07 Apr 2025 12:55:16 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
IP:  10.244.0.27
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfbbd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-zfbbd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/task-pv-pod to ubuntu-20-agent
Warning  Failed     5m10s (x2 over 5m41s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m45s (x5 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2m43s (x3 over 5m58s)  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m43s (x5 over 5m58s)  kubelet            Error: ErrImagePull
Warning  Failed     46s (x20 over 5m57s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    33s (x21 over 5m57s)   kubelet            Back-off pulling image "docker.io/nginx"
addons_test.go:506: (dbg) Run:  kubectl --context minikube logs task-pv-pod -n default
addons_test.go:506: (dbg) Non-zero exit: kubectl --context minikube logs task-pv-pod -n default: exit status 1 (75.323993ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:506: kubectl --context minikube logs task-pv-pod -n default: exit status 1
addons_test.go:507: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:44 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| start   | --download-only -p             | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC |                     |
	|         | minikube --alsologtostderr     |          |         |         |                     |                     |
	|         | --binary-mirror                |          |         |         |                     |                     |
	|         | http://127.0.0.1:38191         |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| start   | -p minikube --alsologtostderr  | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	|         | -v=1 --memory=2048             |          |         |         |                     |                     |
	|         | --wait=true --driver=none      |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:46 UTC |
	| addons  | enable dashboard -p minikube   | minikube | jenkins | v1.35.0 | 07 Apr 25 12:46 UTC |                     |
	| addons  | disable dashboard -p minikube  | minikube | jenkins | v1.35.0 | 07 Apr 25 12:46 UTC |                     |
	| start   | -p minikube --wait=true        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:46 UTC | 07 Apr 25 12:47 UTC |
	|         | --memory=4000                  |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --addons=registry              |          |         |         |                     |                     |
	|         | --addons=metrics-server        |          |         |         |                     |                     |
	|         | --addons=volumesnapshots       |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |          |         |         |                     |                     |
	|         | --addons=gcp-auth              |          |         |         |                     |                     |
	|         | --addons=cloud-spanner         |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin  |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano |          |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| addons  | minikube addons disable        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:53 UTC | 07 Apr 25 12:54 UTC |
	|         | volcano --alsologtostderr -v=1 |          |         |         |                     |                     |
	| addons  | minikube addons disable        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:54 UTC | 07 Apr 25 12:54 UTC |
	|         | gcp-auth --alsologtostderr     |          |         |         |                     |                     |
	|         | -v=1                           |          |         |         |                     |                     |
	| ip      | minikube ip                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:54 UTC | 07 Apr 25 12:54 UTC |
	| addons  | minikube addons disable        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:54 UTC | 07 Apr 25 12:54 UTC |
	|         | registry --alsologtostderr     |          |         |         |                     |                     |
	|         | -v=1                           |          |         |         |                     |                     |
	| addons  | minikube addons disable        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:54 UTC | 07 Apr 25 12:54 UTC |
	|         | inspektor-gadget               |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |          |         |         |                     |                     |
	| addons  | minikube addons                | minikube | jenkins | v1.35.0 | 07 Apr 25 12:54 UTC | 07 Apr 25 12:54 UTC |
	|         | disable metrics-server         |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:46:01
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:46:01.231062 1429316 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:46:01.231195 1429316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:46:01.231206 1429316 out.go:358] Setting ErrFile to fd 2...
	I0407 12:46:01.231210 1429316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:46:01.231464 1429316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1418173/.minikube/bin
	I0407 12:46:01.232140 1429316 out.go:352] Setting JSON to false
	I0407 12:46:01.233179 1429316 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":16105,"bootTime":1744013856,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:46:01.233311 1429316 start.go:139] virtualization: kvm guest
	I0407 12:46:01.235474 1429316 out.go:177] * minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 12:46:01.236694 1429316 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:46:01.236729 1429316 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:46:01.236731 1429316 notify.go:220] Checking for updates...
	I0407 12:46:01.239515 1429316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:46:01.240993 1429316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
	I0407 12:46:01.242159 1429316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
	I0407 12:46:01.243419 1429316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:46:01.244910 1429316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:46:01.246416 1429316 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:46:01.257114 1429316 out.go:177] * Using the none driver based on user configuration
	I0407 12:46:01.258434 1429316 start.go:297] selected driver: none
	I0407 12:46:01.258453 1429316 start.go:901] validating driver "none" against <nil>
	I0407 12:46:01.258480 1429316 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:46:01.258516 1429316 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0407 12:46:01.258825 1429316 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0407 12:46:01.259483 1429316 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:46:01.259773 1429316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:46:01.259810 1429316 cni.go:84] Creating CNI manager for ""
	I0407 12:46:01.259875 1429316 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:46:01.259906 1429316 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:46:01.259962 1429316 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:46:01.261473 1429316 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0407 12:46:01.262965 1429316 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json ...
	I0407 12:46:01.263009 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json: {Name:mk7435778f484db7c9644d73cb119c70d439299f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:01.263157 1429316 start.go:360] acquireMachinesLock for minikube: {Name:mk53793948be750dfc684af85278e6856b44afc9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 12:46:01.263242 1429316 start.go:364] duration metric: took 28.329µs to acquireMachinesLock for "minikube"
	I0407 12:46:01.263265 1429316 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 12:46:01.263340 1429316 start.go:125] createHost starting for "" (driver="none")
	I0407 12:46:01.265117 1429316 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0407 12:46:01.267404 1429316 exec_runner.go:51] Run: systemctl --version
	I0407 12:46:01.270063 1429316 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0407 12:46:01.270101 1429316 client.go:168] LocalClient.Create starting
	I0407 12:46:01.270187 1429316 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/ca.pem
	I0407 12:46:01.270218 1429316 main.go:141] libmachine: Decoding PEM data...
	I0407 12:46:01.270234 1429316 main.go:141] libmachine: Parsing certificate...
	I0407 12:46:01.270296 1429316 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/cert.pem
	I0407 12:46:01.270319 1429316 main.go:141] libmachine: Decoding PEM data...
	I0407 12:46:01.270329 1429316 main.go:141] libmachine: Parsing certificate...
	I0407 12:46:01.270642 1429316 client.go:171] duration metric: took 532.06µs to LocalClient.Create
	I0407 12:46:01.270666 1429316 start.go:167] duration metric: took 613.883µs to libmachine.API.Create "minikube"
	I0407 12:46:01.270673 1429316 start.go:293] postStartSetup for "minikube" (driver="none")
	I0407 12:46:01.270708 1429316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 12:46:01.270753 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 12:46:01.280436 1429316 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 12:46:01.280458 1429316 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 12:46:01.280466 1429316 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 12:46:01.282450 1429316 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0407 12:46:01.283786 1429316 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1418173/.minikube/addons for local assets ...
	I0407 12:46:01.283847 1429316 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1418173/.minikube/files for local assets ...
	I0407 12:46:01.283872 1429316 start.go:296] duration metric: took 13.189796ms for postStartSetup
	I0407 12:46:01.284520 1429316 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json ...
	I0407 12:46:01.284674 1429316 start.go:128] duration metric: took 21.323169ms to createHost
	I0407 12:46:01.284690 1429316 start.go:83] releasing machines lock for "minikube", held for 21.433196ms
	I0407 12:46:01.285057 1429316 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 12:46:01.285154 1429316 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0407 12:46:01.287094 1429316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 12:46:01.287141 1429316 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 12:46:01.297196 1429316 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0407 12:46:01.297229 1429316 start.go:495] detecting cgroup driver to use...
	I0407 12:46:01.297261 1429316 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 12:46:01.297368 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:46:01.319217 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 12:46:01.329584 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 12:46:01.338895 1429316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 12:46:01.338957 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 12:46:01.349057 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:46:01.359932 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 12:46:01.375600 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:46:01.386405 1429316 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 12:46:01.396041 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 12:46:01.406577 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 12:46:01.429519 1429316 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 12:46:01.439514 1429316 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 12:46:01.448440 1429316 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 12:46:01.456361 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:01.690650 1429316 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0407 12:46:01.754996 1429316 start.go:495] detecting cgroup driver to use...
	I0407 12:46:01.755055 1429316 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 12:46:01.755169 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:46:01.781838 1429316 exec_runner.go:51] Run: which cri-dockerd
	I0407 12:46:01.782866 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 12:46:01.791549 1429316 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0407 12:46:01.791585 1429316 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0407 12:46:01.791637 1429316 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0407 12:46:01.800329 1429316 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 12:46:01.800548 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1817254808 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0407 12:46:01.809824 1429316 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0407 12:46:02.026255 1429316 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0407 12:46:02.249916 1429316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 12:46:02.250098 1429316 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0407 12:46:02.250116 1429316 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0407 12:46:02.250166 1429316 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0407 12:46:02.259552 1429316 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0407 12:46:02.259746 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1779169503 /etc/docker/daemon.json
	I0407 12:46:02.268933 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:02.501531 1429316 exec_runner.go:51] Run: sudo systemctl restart docker
	I0407 12:46:02.848272 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 12:46:02.861572 1429316 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0407 12:46:02.879408 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:46:02.890750 1429316 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0407 12:46:03.122082 1429316 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0407 12:46:03.361507 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:03.590334 1429316 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0407 12:46:03.605891 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:46:03.618044 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:03.839254 1429316 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0407 12:46:03.911084 1429316 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 12:46:03.911171 1429316 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0407 12:46:03.912678 1429316 start.go:563] Will wait 60s for crictl version
	I0407 12:46:03.912723 1429316 exec_runner.go:51] Run: which crictl
	I0407 12:46:03.913606 1429316 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0407 12:46:03.947511 1429316 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.0.4
	RuntimeApiVersion:  v1
	I0407 12:46:03.947603 1429316 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0407 12:46:03.971036 1429316 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0407 12:46:03.995613 1429316 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.4 ...
	I0407 12:46:03.995718 1429316 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0407 12:46:03.998437 1429316 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0407 12:46:03.999593 1429316 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.132.0.4 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 12:46:03.999705 1429316 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:46:03.999717 1429316 kubeadm.go:934] updating node { 10.132.0.4 8443 v1.32.2 docker true true} ...
	I0407 12:46:03.999847 1429316 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.132.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0407 12:46:03.999895 1429316 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0407 12:46:04.048035 1429316 cni.go:84] Creating CNI manager for ""
	I0407 12:46:04.048071 1429316 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:46:04.048086 1429316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 12:46:04.048111 1429316 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.132.0.4 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.132.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.132.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 12:46:04.048253 1429316 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.132.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "10.132.0.4"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.132.0.4"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 12:46:04.048321 1429316 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 12:46:04.057083 1429316 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0407 12:46:04.057170 1429316 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0407 12:46:04.065629 1429316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0407 12:46:04.065684 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:46:04.065685 1429316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0407 12:46:04.065755 1429316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0407 12:46:04.065764 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0407 12:46:04.065802 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0407 12:46:04.077514 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0407 12:46:04.124513 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3196352494 /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 12:46:04.130593 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4160672640 /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 12:46:04.149975 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2575322256 /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 12:46:04.230292 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 12:46:04.239456 1429316 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0407 12:46:04.239485 1429316 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0407 12:46:04.239525 1429316 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0407 12:46:04.247941 1429316 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0407 12:46:04.248129 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2812554544 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0407 12:46:04.256651 1429316 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0407 12:46:04.256679 1429316 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0407 12:46:04.256714 1429316 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0407 12:46:04.264872 1429316 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 12:46:04.265044 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2861707748 /lib/systemd/system/kubelet.service
	I0407 12:46:04.273635 1429316 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0407 12:46:04.273784 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3026017713 /var/tmp/minikube/kubeadm.yaml.new
	I0407 12:46:04.282029 1429316 exec_runner.go:51] Run: grep 10.132.0.4	control-plane.minikube.internal$ /etc/hosts
	I0407 12:46:04.283624 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:04.517665 1429316 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0407 12:46:04.532121 1429316 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube for IP: 10.132.0.4
	I0407 12:46:04.532154 1429316 certs.go:194] generating shared ca certs ...
	I0407 12:46:04.532182 1429316 certs.go:226] acquiring lock for ca certs: {Name:mke037ea5f6110cd4db349ee47a4532de031e41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.532401 1429316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.key
	I0407 12:46:04.532475 1429316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/proxy-client-ca.key
	I0407 12:46:04.532490 1429316 certs.go:256] generating profile certs ...
	I0407 12:46:04.532571 1429316 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.key
	I0407 12:46:04.532593 1429316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.crt with IP's: []
	I0407 12:46:04.746361 1429316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.crt ...
	I0407 12:46:04.746398 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.crt: {Name:mkd685522b407e574e9a17242256ea962f13d180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.746567 1429316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.key ...
	I0407 12:46:04.746584 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/client.key: {Name:mk93b3de66b65705ca976ab8fb0e07c53d19cd38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.746673 1429316 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f
	I0407 12:46:04.746690 1429316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.132.0.4]
	I0407 12:46:04.946265 1429316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f ...
	I0407 12:46:04.946301 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f: {Name:mkc92f9f9b71902112ff236a3fce9245b28fbc4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.946465 1429316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f ...
	I0407 12:46:04.946486 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f: {Name:mk8e0d10049da8458969638f3be970030e3a7c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:04.946565 1429316 certs.go:381] copying /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt.b039158f -> /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt
	I0407 12:46:04.946677 1429316 certs.go:385] copying /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key.b039158f -> /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key
	I0407 12:46:04.946745 1429316 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key
	I0407 12:46:04.946768 1429316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0407 12:46:05.422333 1429316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt ...
	I0407 12:46:05.422367 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt: {Name:mk657c14bd9f3b8cdc778a995b4cc49084dc96e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:05.422505 1429316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key ...
	I0407 12:46:05.422521 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key: {Name:mk04945c273de7864e5113cfa901b08a2b911d34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:05.422716 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 12:46:05.422763 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/ca.pem (1082 bytes)
	I0407 12:46:05.422791 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/cert.pem (1123 bytes)
	I0407 12:46:05.422814 1429316 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1418173/.minikube/certs/key.pem (1675 bytes)
	I0407 12:46:05.423465 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 12:46:05.423590 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3061948531 /var/lib/minikube/certs/ca.crt
	I0407 12:46:05.432860 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0407 12:46:05.433025 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3068014908 /var/lib/minikube/certs/ca.key
	I0407 12:46:05.443022 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 12:46:05.443199 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3928182364 /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 12:46:05.453837 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 12:46:05.453966 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2350601260 /var/lib/minikube/certs/proxy-client-ca.key
	I0407 12:46:05.463595 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0407 12:46:05.463752 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2667253322 /var/lib/minikube/certs/apiserver.crt
	I0407 12:46:05.473238 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 12:46:05.473362 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2852199116 /var/lib/minikube/certs/apiserver.key
	I0407 12:46:05.482563 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 12:46:05.482740 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3185536542 /var/lib/minikube/certs/proxy-client.crt
	I0407 12:46:05.491833 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 12:46:05.491981 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000808512 /var/lib/minikube/certs/proxy-client.key
	I0407 12:46:05.500441 1429316 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0407 12:46:05.500465 1429316 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.500497 1429316 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.508314 1429316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1418173/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 12:46:05.508471 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube385856075 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.517215 1429316 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 12:46:05.517362 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2963500488 /var/lib/minikube/kubeconfig
	I0407 12:46:05.526041 1429316 exec_runner.go:51] Run: openssl version
	I0407 12:46:05.528924 1429316 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 12:46:05.537534 1429316 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.538811 1429316 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Apr  7 12:46 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.538859 1429316 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:46:05.541631 1429316 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 12:46:05.552781 1429316 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 12:46:05.553844 1429316 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 12:46:05.553891 1429316 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.132.0.4 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:46:05.553998 1429316 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 12:46:05.570270 1429316 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 12:46:05.579733 1429316 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 12:46:05.595767 1429316 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0407 12:46:05.617394 1429316 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 12:46:05.627797 1429316 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 12:46:05.627825 1429316 kubeadm.go:157] found existing configuration files:
	
	I0407 12:46:05.627872 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 12:46:05.636647 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 12:46:05.636704 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 12:46:05.644490 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 12:46:05.653066 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 12:46:05.653120 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 12:46:05.660877 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 12:46:05.670067 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 12:46:05.670133 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 12:46:05.678615 1429316 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 12:46:05.689345 1429316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 12:46:05.689418 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 12:46:05.697526 1429316 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 12:46:05.733301 1429316 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 12:46:05.733366 1429316 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 12:46:05.761513 1429316 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0407 12:46:05.827926 1429316 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 12:46:05.827987 1429316 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 12:46:05.827995 1429316 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 12:46:05.828001 1429316 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 12:46:05.838908 1429316 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 12:46:05.842792 1429316 out.go:235]   - Generating certificates and keys ...
	I0407 12:46:05.842849 1429316 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 12:46:05.842866 1429316 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 12:46:05.929822 1429316 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 12:46:06.034156 1429316 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 12:46:06.137512 1429316 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 12:46:06.399738 1429316 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 12:46:06.658454 1429316 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 12:46:06.658837 1429316 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
	I0407 12:46:06.793515 1429316 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 12:46:06.793616 1429316 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
	I0407 12:46:07.111754 1429316 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 12:46:07.239104 1429316 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 12:46:07.374867 1429316 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 12:46:07.375054 1429316 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 12:46:07.516836 1429316 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 12:46:07.676713 1429316 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 12:46:08.039272 1429316 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 12:46:08.150766 1429316 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 12:46:08.340603 1429316 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 12:46:08.341788 1429316 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 12:46:08.344254 1429316 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 12:46:08.346695 1429316 out.go:235]   - Booting up control plane ...
	I0407 12:46:08.346729 1429316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 12:46:08.346756 1429316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 12:46:08.347211 1429316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 12:46:08.372882 1429316 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 12:46:08.377541 1429316 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 12:46:08.377576 1429316 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 12:46:08.617762 1429316 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 12:46:08.617787 1429316 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 12:46:09.119698 1429316 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.893396ms
	I0407 12:46:09.119727 1429316 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 12:46:14.121618 1429316 kubeadm.go:310] [api-check] The API server is healthy after 5.001918177s
	I0407 12:46:14.134209 1429316 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 12:46:14.145166 1429316 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 12:46:14.166074 1429316 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 12:46:14.166105 1429316 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 12:46:14.173597 1429316 kubeadm.go:310] [bootstrap-token] Using token: p4kop0.df2qjc17ds7iaiam
	I0407 12:46:14.175343 1429316 out.go:235]   - Configuring RBAC rules ...
	I0407 12:46:14.175389 1429316 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 12:46:14.178620 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 12:46:14.184157 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 12:46:14.186768 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 12:46:14.189495 1429316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 12:46:14.193735 1429316 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 12:46:14.528888 1429316 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 12:46:14.951790 1429316 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 12:46:15.528465 1429316 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 12:46:15.529302 1429316 kubeadm.go:310] 
	I0407 12:46:15.529328 1429316 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 12:46:15.529333 1429316 kubeadm.go:310] 
	I0407 12:46:15.529338 1429316 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 12:46:15.529342 1429316 kubeadm.go:310] 
	I0407 12:46:15.529346 1429316 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 12:46:15.529350 1429316 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 12:46:15.529376 1429316 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 12:46:15.529385 1429316 kubeadm.go:310] 
	I0407 12:46:15.529390 1429316 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 12:46:15.529394 1429316 kubeadm.go:310] 
	I0407 12:46:15.529398 1429316 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 12:46:15.529402 1429316 kubeadm.go:310] 
	I0407 12:46:15.529406 1429316 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 12:46:15.529410 1429316 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 12:46:15.529415 1429316 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 12:46:15.529422 1429316 kubeadm.go:310] 
	I0407 12:46:15.529428 1429316 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 12:46:15.529432 1429316 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 12:46:15.529434 1429316 kubeadm.go:310] 
	I0407 12:46:15.529439 1429316 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p4kop0.df2qjc17ds7iaiam \
	I0407 12:46:15.529443 1429316 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a0218baebfbd26086bf2c1fda945fcf4b4d1b776503555f789838ba1e80aed9c \
	I0407 12:46:15.529446 1429316 kubeadm.go:310] 	--control-plane 
	I0407 12:46:15.529448 1429316 kubeadm.go:310] 
	I0407 12:46:15.529451 1429316 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 12:46:15.529454 1429316 kubeadm.go:310] 
	I0407 12:46:15.529456 1429316 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p4kop0.df2qjc17ds7iaiam \
	I0407 12:46:15.529459 1429316 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a0218baebfbd26086bf2c1fda945fcf4b4d1b776503555f789838ba1e80aed9c 
	I0407 12:46:15.532573 1429316 cni.go:84] Creating CNI manager for ""
	I0407 12:46:15.532610 1429316 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:46:15.534535 1429316 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 12:46:15.535691 1429316 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0407 12:46:15.547497 1429316 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0407 12:46:15.547645 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3694858315 /etc/cni/net.d/1-k8s.conflist
	I0407 12:46:15.557811 1429316 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 12:46:15.557870 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:15.557891 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent minikube.k8s.io/updated_at=2025_04_07T12_46_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0407 12:46:15.566997 1429316 ops.go:34] apiserver oom_adj: -16
	I0407 12:46:15.628992 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:16.129805 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:16.629609 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:17.129288 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:17.629737 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:18.129916 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:18.629214 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:19.129880 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:19.629695 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:20.129764 1429316 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:46:20.197626 1429316 kubeadm.go:1113] duration metric: took 4.639807769s to wait for elevateKubeSystemPrivileges
	I0407 12:46:20.197664 1429316 kubeadm.go:394] duration metric: took 14.643775896s to StartCluster
	I0407 12:46:20.197703 1429316 settings.go:142] acquiring lock: {Name:mk1a74bdc4efde062e045448da0c418856eac793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:20.197785 1429316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-1418173/kubeconfig
	I0407 12:46:20.198485 1429316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/kubeconfig: {Name:mk79daf009e4d10ee19338674231a661a076a223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:46:20.198740 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 12:46:20.198900 1429316 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:true volumesnapshots:true yakd:true]
	I0407 12:46:20.199009 1429316 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:46:20.199034 1429316 addons.go:69] Setting yakd=true in profile "minikube"
	I0407 12:46:20.199052 1429316 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0407 12:46:20.199061 1429316 addons.go:238] Setting addon yakd=true in "minikube"
	I0407 12:46:20.199070 1429316 addons.go:69] Setting amd-gpu-device-plugin=true in profile "minikube"
	I0407 12:46:20.199083 1429316 addons.go:238] Setting addon amd-gpu-device-plugin=true in "minikube"
	I0407 12:46:20.199100 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.199106 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.199249 1429316 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0407 12:46:20.199278 1429316 addons.go:238] Setting addon cloud-spanner=true in "minikube"
	I0407 12:46:20.199297 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.199327 1429316 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0407 12:46:20.199353 1429316 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0407 12:46:20.199883 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.199907 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.199922 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.199941 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.199942 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.199982 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.200038 1429316 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0407 12:46:20.200132 1429316 addons.go:238] Setting addon csi-hostpath-driver=true in "minikube"
	I0407 12:46:20.200175 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.200269 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.200284 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.200314 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.200885 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.200911 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.200946 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.201033 1429316 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0407 12:46:20.201064 1429316 mustload.go:65] Loading cluster: minikube
	I0407 12:46:20.201270 1429316 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:46:20.202003 1429316 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0407 12:46:20.202025 1429316 addons.go:238] Setting addon storage-provisioner=true in "minikube"
	I0407 12:46:20.202177 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.202879 1429316 out.go:177] * Configuring local host environment ...
	I0407 12:46:20.203400 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.203417 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.203451 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0407 12:46:20.204888 1429316 out.go:270] * 
	W0407 12:46:20.204905 1429316 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0407 12:46:20.204912 1429316 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0407 12:46:20.204919 1429316 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0407 12:46:20.204925 1429316 out.go:270] * 
	W0407 12:46:20.204969 1429316 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0407 12:46:20.204976 1429316 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0407 12:46:20.204981 1429316 out.go:270] * 
	W0407 12:46:20.205013 1429316 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0407 12:46:20.205020 1429316 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0407 12:46:20.205025 1429316 out.go:270] * 
	W0407 12:46:20.205032 1429316 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0407 12:46:20.205059 1429316 start.go:235] Will wait 6m0s for node &{Name: IP:10.132.0.4 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 12:46:20.205947 1429316 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0407 12:46:20.205967 1429316 addons.go:238] Setting addon nvidia-device-plugin=true in "minikube"
	I0407 12:46:20.205997 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.206023 1429316 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0407 12:46:20.206045 1429316 addons.go:238] Setting addon metrics-server=true in "minikube"
	I0407 12:46:20.206080 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.206445 1429316 addons.go:69] Setting registry=true in profile "minikube"
	I0407 12:46:20.206466 1429316 addons.go:238] Setting addon registry=true in "minikube"
	I0407 12:46:20.206547 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.206622 1429316 addons.go:69] Setting volcano=true in profile "minikube"
	I0407 12:46:20.206644 1429316 out.go:177] * Verifying Kubernetes components...
	I0407 12:46:20.206669 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.206689 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.206717 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.206727 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.206734 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.206780 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.206841 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.206865 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.206903 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.206918 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.206936 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.207006 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.207280 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.207337 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.206656 1429316 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0407 12:46:20.207373 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.207390 1429316 addons.go:238] Setting addon volumesnapshots=true in "minikube"
	I0407 12:46:20.207430 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.206647 1429316 addons.go:238] Setting addon volcano=true in "minikube"
	I0407 12:46:20.207542 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.199062 1429316 addons.go:238] Setting addon inspektor-gadget=true in "minikube"
	I0407 12:46:20.207852 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.208086 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.208111 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.208142 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.208317 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.208378 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.208278 1429316 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0407 12:46:20.208509 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.211997 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.212040 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.212080 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.222010 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.223069 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.223578 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.224955 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.225989 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.243468 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.261161 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.243475 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.262094 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.262176 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.264542 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.264606 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.264844 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.264909 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.266233 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.266293 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.269905 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.276799 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.276835 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.278142 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.278955 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.279018 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.279925 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.282436 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.284367 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.286078 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0407 12:46:20.287484 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0407 12:46:20.289453 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.289485 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.291042 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0407 12:46:20.292332 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.292410 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.293848 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0407 12:46:20.294863 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.294880 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.294889 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.295689 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.295875 1429316 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0407 12:46:20.296807 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.296874 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.297128 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0407 12:46:20.297166 1429316 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0407 12:46:20.297339 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3526351157 /etc/kubernetes/addons/yakd-ns.yaml
	I0407 12:46:20.297496 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0407 12:46:20.299046 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0407 12:46:20.300004 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.300028 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.301485 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0407 12:46:20.302071 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.302142 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.303806 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.303862 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.303964 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0407 12:46:20.304170 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.304219 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.304379 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.304394 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.305346 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0407 12:46:20.305381 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0407 12:46:20.305539 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3653908400 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0407 12:46:20.306131 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.306159 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.309295 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.309372 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.310206 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.312174 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.313237 1429316 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0407 12:46:20.314436 1429316 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0407 12:46:20.319103 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.319175 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.319721 1429316 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 12:46:20.321916 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.321946 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.322334 1429316 out.go:177]   - Using image docker.io/registry:2.8.3
	I0407 12:46:20.322678 1429316 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:46:20.322713 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0407 12:46:20.322868 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.322897 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.323017 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube252749164 /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:46:20.324672 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0407 12:46:20.324696 1429316 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0407 12:46:20.324702 1429316 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0407 12:46:20.324712 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0407 12:46:20.324836 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1922338861 /etc/kubernetes/addons/yakd-sa.yaml
	I0407 12:46:20.324992 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2274747492 /etc/kubernetes/addons/registry-rc.yaml
	I0407 12:46:20.326202 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.326256 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.326328 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.326347 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.327053 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.327340 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.327365 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.327998 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.329088 1429316 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
	I0407 12:46:20.330035 1429316 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0407 12:46:20.332248 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.332465 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.334867 1429316 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0407 12:46:20.334922 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0407 12:46:20.335101 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1711987977 /etc/kubernetes/addons/deployment.yaml
	I0407 12:46:20.336209 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.336234 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.336268 1429316 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0407 12:46:20.336319 1429316 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 12:46:20.336931 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.336954 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.340717 1429316 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0407 12:46:20.340791 1429316 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0407 12:46:20.340948 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1569781149 /etc/kubernetes/addons/ig-crd.yaml
	I0407 12:46:20.340978 1429316 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.341009 1429316 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0407 12:46:20.341016 1429316 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.341047 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.340768 1429316 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
	I0407 12:46:20.345492 1429316 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
	I0407 12:46:20.345582 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.345760 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0407 12:46:20.345786 1429316 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0407 12:46:20.345907 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2676008935 /etc/kubernetes/addons/yakd-crb.yaml
	I0407 12:46:20.346908 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:46:20.350669 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.350951 1429316 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0407 12:46:20.350997 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
	I0407 12:46:20.352791 1429316 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0407 12:46:20.356470 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0407 12:46:20.356511 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0407 12:46:20.357258 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3825462376 /etc/kubernetes/addons/volcano-deployment.yaml
	I0407 12:46:20.357984 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1913311062 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0407 12:46:20.358967 1429316 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0407 12:46:20.359621 1429316 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0407 12:46:20.359658 1429316 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:46:20.359664 1429316 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0407 12:46:20.359691 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0407 12:46:20.359845 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2952795493 /etc/kubernetes/addons/registry-svc.yaml
	I0407 12:46:20.360495 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube189832616 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:46:20.361524 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 12:46:20.361558 1429316 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 12:46:20.365172 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4120191610 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 12:46:20.374944 1429316 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:46:20.374992 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0407 12:46:20.375186 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2179279931 /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:46:20.379041 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.379374 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.380385 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 12:46:20.380560 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3051500119 /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.385196 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.387870 1429316 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0407 12:46:20.388702 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0407 12:46:20.390037 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0407 12:46:20.390067 1429316 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0407 12:46:20.390187 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube31446616 /etc/kubernetes/addons/yakd-svc.yaml
	I0407 12:46:20.390764 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0407 12:46:20.390800 1429316 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0407 12:46:20.391569 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3779181310 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0407 12:46:20.394337 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:46:20.398769 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0407 12:46:20.398806 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0407 12:46:20.398933 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube906689499 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0407 12:46:20.402373 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0407 12:46:20.402640 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.402664 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.405039 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:46:20.408207 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.409282 1429316 addons.go:238] Setting addon default-storageclass=true in "minikube"
	I0407 12:46:20.409335 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:20.410204 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:46:20.411381 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:20.411413 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:20.411457 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:20.416651 1429316 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0407 12:46:20.416753 1429316 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0407 12:46:20.416972 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2826717481 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0407 12:46:20.419552 1429316 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:46:20.419587 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0407 12:46:20.419724 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3788447769 /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:46:20.421654 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 12:46:20.421683 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0407 12:46:20.422435 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1998580135 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 12:46:20.425248 1429316 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:46:20.425278 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0407 12:46:20.425416 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3456691057 /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:46:20.470027 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:46:20.471917 1429316 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0407 12:46:20.471958 1429316 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0407 12:46:20.472122 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2976229442 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0407 12:46:20.472656 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0407 12:46:20.472682 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0407 12:46:20.472807 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube422639263 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0407 12:46:20.474651 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:46:20.497912 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 12:46:20.497967 1429316 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 12:46:20.498143 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3106965212 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 12:46:20.514273 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:20.536535 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0407 12:46:20.536573 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0407 12:46:20.536697 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3748851246 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0407 12:46:20.558030 1429316 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0407 12:46:20.558071 1429316 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0407 12:46:20.558226 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3460275038 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0407 12:46:20.583644 1429316 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:46:20.583701 1429316 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 12:46:20.583856 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4292587570 /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:46:20.602264 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:46:20.613494 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0407 12:46:20.613554 1429316 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0407 12:46:20.613690 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube210220550 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0407 12:46:20.697780 1429316 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0407 12:46:20.710202 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:20.710292 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:20.726957 1429316 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0407 12:46:20.727004 1429316 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0407 12:46:20.727156 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1419895819 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0407 12:46:20.758069 1429316 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent" to be "Ready" ...
	I0407 12:46:20.760314 1429316 node_ready.go:49] node "ubuntu-20-agent" has status "Ready":"True"
	I0407 12:46:20.760337 1429316 node_ready.go:38] duration metric: took 2.226937ms for node "ubuntu-20-agent" to be "Ready" ...
	I0407 12:46:20.760348 1429316 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:46:20.776617 1429316 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:46:20.776664 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0407 12:46:20.779959 1429316 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:20.786355 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3871166456 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:46:20.823708 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0407 12:46:20.823745 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0407 12:46:20.823889 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3128803471 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0407 12:46:20.824070 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:20.824088 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:20.831089 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:20.831141 1429316 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.831160 1429316 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0407 12:46:20.831168 1429316 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.831207 1429316 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.856878 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0407 12:46:20.856920 1429316 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0407 12:46:20.859857 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube168228944 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0407 12:46:20.883053 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:46:20.886655 1429316 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 12:46:20.886842 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2746251177 /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.916503 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0407 12:46:20.916548 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0407 12:46:20.916700 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4173374182 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0407 12:46:20.925691 1429316 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0407 12:46:20.958076 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:46:20.987499 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0407 12:46:20.987568 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0407 12:46:20.987741 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2420038711 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0407 12:46:21.040807 1429316 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:46:21.040860 1429316 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0407 12:46:21.041041 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3268416938 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:46:21.136865 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:46:21.409763 1429316 addons.go:479] Verifying addon registry=true in "minikube"
	I0407 12:46:21.412264 1429316 out.go:177] * Verifying registry addon...
	I0407 12:46:21.415321 1429316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0407 12:46:21.418713 1429316 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0407 12:46:21.418736 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:21.433549 1429316 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0407 12:46:21.570206 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.159952141s)
	I0407 12:46:21.648841 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.243753231s)
	I0407 12:46:21.711886 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.24178473s)
	I0407 12:46:21.717694 1429316 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0407 12:46:21.720411 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.118083477s)
	I0407 12:46:21.720456 1429316 addons.go:479] Verifying addon metrics-server=true in "minikube"
	I0407 12:46:21.922074 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:22.419286 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:22.595941 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.712830875s)
	W0407 12:46:22.595992 1429316 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0407 12:46:22.596030 1429316 retry.go:31] will retry after 202.751969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0407 12:46:22.786098 1429316 pod_ready.go:103] pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace has status "Ready":"False"
	I0407 12:46:22.799303 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:46:22.919554 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:23.425450 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:23.456836 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.319887875s)
	I0407 12:46:23.456881 1429316 addons.go:479] Verifying addon csi-hostpath-driver=true in "minikube"
	I0407 12:46:23.462996 1429316 out.go:177] * Verifying csi-hostpath-driver addon...
	I0407 12:46:23.467517 1429316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0407 12:46:23.500910 1429316 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0407 12:46:23.500946 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:23.678635 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.276218032s)
	I0407 12:46:23.919571 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:23.987515 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:24.419253 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:24.471440 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:24.919038 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:24.971484 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:25.285637 1429316 pod_ready.go:93] pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:25.285663 1429316 pod_ready.go:82] duration metric: took 4.505662003s for pod "amd-gpu-device-plugin-86df5" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:25.285673 1429316 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-28dsp" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:25.419494 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:25.521115 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:25.533187 1429316 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.733792804s)
	I0407 12:46:25.918839 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:25.971084 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:26.419692 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:26.472363 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:26.919941 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:27.020780 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:27.108165 1429316 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0407 12:46:27.108484 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1890754882 /var/lib/minikube/google_application_credentials.json
	I0407 12:46:27.119734 1429316 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0407 12:46:27.119899 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2219183012 /var/lib/minikube/google_cloud_project
	I0407 12:46:27.131325 1429316 addons.go:238] Setting addon gcp-auth=true in "minikube"
	I0407 12:46:27.131402 1429316 host.go:66] Checking if "minikube" exists ...
	I0407 12:46:27.132217 1429316 kubeconfig.go:125] found "minikube" server: "https://10.132.0.4:8443"
	I0407 12:46:27.132247 1429316 api_server.go:166] Checking apiserver status ...
	I0407 12:46:27.132286 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:27.152075 1429316 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1430717/cgroup
	I0407 12:46:27.163123 1429316 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429"
	I0407 12:46:27.163212 1429316 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod659090bce18e3f1dd432f0808cb3d030/1b21328ae243e3dd68fcc91c85e0eca6776102129d556ce59026d98160b4f429/freezer.state
	I0407 12:46:27.172494 1429316 api_server.go:204] freezer state: "THAWED"
	I0407 12:46:27.172531 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:27.177380 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:27.177462 1429316 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0407 12:46:27.180770 1429316 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:46:27.182360 1429316 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0407 12:46:27.183717 1429316 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0407 12:46:27.183761 1429316 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0407 12:46:27.183920 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1049495724 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0407 12:46:27.196439 1429316 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0407 12:46:27.196488 1429316 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0407 12:46:27.196686 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube693064940 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0407 12:46:27.206666 1429316 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:46:27.206702 1429316 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0407 12:46:27.206855 1429316 exec_runner.go:51] Run: sudo cp -a /tmp/minikube58906347 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:46:27.218711 1429316 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:46:27.291476 1429316 pod_ready.go:93] pod "coredns-668d6bf9bc-28dsp" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:27.291502 1429316 pod_ready.go:82] duration metric: took 2.005821765s for pod "coredns-668d6bf9bc-28dsp" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.291519 1429316 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c67zv" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.295922 1429316 pod_ready.go:93] pod "coredns-668d6bf9bc-c67zv" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:27.295949 1429316 pod_ready.go:82] duration metric: took 4.420137ms for pod "coredns-668d6bf9bc-c67zv" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.295962 1429316 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.299925 1429316 pod_ready.go:93] pod "etcd-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:27.299965 1429316 pod_ready.go:82] duration metric: took 3.992923ms for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.299978 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:27.419975 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:27.471432 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:27.920057 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:27.962665 1429316 addons.go:479] Verifying addon gcp-auth=true in "minikube"
	I0407 12:46:27.965706 1429316 out.go:177] * Verifying gcp-auth addon...
	I0407 12:46:27.968051 1429316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0407 12:46:28.020196 1429316 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0407 12:46:28.020499 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:28.420045 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:28.471286 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:28.805902 1429316 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:28.805928 1429316 pod_ready.go:82] duration metric: took 1.505941321s for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.805938 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.811208 1429316 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:28.811254 1429316 pod_ready.go:82] duration metric: took 5.307688ms for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.811269 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4ktb9" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.889612 1429316 pod_ready.go:93] pod "kube-proxy-4ktb9" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:28.889639 1429316 pod_ready.go:82] duration metric: took 78.35951ms for pod "kube-proxy-4ktb9" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.889652 1429316 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:28.919192 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:29.020417 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:29.289605 1429316 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:29.289637 1429316 pod_ready.go:82] duration metric: took 399.974892ms for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:29.289653 1429316 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:29.419981 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:29.471030 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:29.918490 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:29.971448 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:30.419178 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:30.471563 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:30.918873 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:31.020301 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:31.296406 1429316 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace has status "Ready":"False"
	I0407 12:46:31.419473 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:31.471850 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:31.919476 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:31.971849 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:32.419096 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:32.471663 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:32.919835 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:32.971160 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:33.419000 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:33.519607 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:33.794578 1429316 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace has status "Ready":"False"
	I0407 12:46:33.918521 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:33.989387 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:34.419833 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:34.470704 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:34.919739 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:35.020689 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:35.295351 1429316 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace has status "Ready":"True"
	I0407 12:46:35.295382 1429316 pod_ready.go:82] duration metric: took 6.005719807s for pod "nvidia-device-plugin-daemonset-qtjqk" in "kube-system" namespace to be "Ready" ...
	I0407 12:46:35.295394 1429316 pod_ready.go:39] duration metric: took 14.53503087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:46:35.295421 1429316 api_server.go:52] waiting for apiserver process to appear ...
	I0407 12:46:35.295487 1429316 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:46:35.314133 1429316 api_server.go:72] duration metric: took 15.109039821s to wait for apiserver process to appear ...
	I0407 12:46:35.314163 1429316 api_server.go:88] waiting for apiserver healthz status ...
	I0407 12:46:35.314188 1429316 api_server.go:253] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0407 12:46:35.317933 1429316 api_server.go:279] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0407 12:46:35.318854 1429316 api_server.go:141] control plane version: v1.32.2
	I0407 12:46:35.318881 1429316 api_server.go:131] duration metric: took 4.708338ms to wait for apiserver health ...
	I0407 12:46:35.318889 1429316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 12:46:35.322611 1429316 system_pods.go:59] 17 kube-system pods found
	I0407 12:46:35.322656 1429316 system_pods.go:61] "amd-gpu-device-plugin-86df5" [ba9ab47c-61f0-4711-959e-29c976ef7c89] Running
	I0407 12:46:35.322666 1429316 system_pods.go:61] "coredns-668d6bf9bc-28dsp" [c3edd2f1-75f3-4345-9544-93c2a6f0f5d3] Running
	I0407 12:46:35.322677 1429316 system_pods.go:61] "csi-hostpath-attacher-0" [8f7840f4-1626-4a29-be20-6998152854a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:46:35.322690 1429316 system_pods.go:61] "csi-hostpath-resizer-0" [06f1b8f1-d561-44df-8d0e-e5191281a47f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0407 12:46:35.322700 1429316 system_pods.go:61] "csi-hostpathplugin-n7jq8" [7f9c7966-52c5-4bcb-84c7-1915efadd81b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:46:35.322708 1429316 system_pods.go:61] "etcd-ubuntu-20-agent" [13ea58ff-509e-403d-90ae-292ab15ea901] Running
	I0407 12:46:35.322712 1429316 system_pods.go:61] "kube-apiserver-ubuntu-20-agent" [8832ae71-7c9c-4d9e-a74d-d2dc87fcc0a1] Running
	I0407 12:46:35.322718 1429316 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent" [73ba7bcb-e73b-4403-a7d7-9532589d0ab9] Running
	I0407 12:46:35.322723 1429316 system_pods.go:61] "kube-proxy-4ktb9" [f218d86a-31ef-4897-b9e4-d53c0a6eb365] Running
	I0407 12:46:35.322728 1429316 system_pods.go:61] "kube-scheduler-ubuntu-20-agent" [58f3fb78-0ec4-41c5-a20f-9a0df3c2f9ce] Running
	I0407 12:46:35.322741 1429316 system_pods.go:61] "metrics-server-7fbb699795-kfmft" [723d2ed5-e3cb-4cc3-80d7-62e3c337502a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 12:46:35.322746 1429316 system_pods.go:61] "nvidia-device-plugin-daemonset-qtjqk" [861c99d3-8db6-4690-9b9a-9445eb29a1b1] Running
	I0407 12:46:35.322754 1429316 system_pods.go:61] "registry-6c88467877-kwnrb" [4fbcb06c-10f2-48eb-ae63-5c09b49e6099] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0407 12:46:35.322762 1429316 system_pods.go:61] "registry-proxy-gpv45" [1ee0f741-4f8b-4063-832c-bfc311b610aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0407 12:46:35.322772 1429316 system_pods.go:61] "snapshot-controller-68b874b76f-7465t" [bacd4eea-22af-4b2e-a3c3-c11adcd9d06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:46:35.322782 1429316 system_pods.go:61] "snapshot-controller-68b874b76f-bnf6p" [36a09b5c-f06d-41d9-b331-82f98e9152c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:46:35.322787 1429316 system_pods.go:61] "storage-provisioner" [18b8d7ec-1526-45c5-8660-6ab5bcb5dde2] Running
	I0407 12:46:35.322795 1429316 system_pods.go:74] duration metric: took 3.900184ms to wait for pod list to return data ...
	I0407 12:46:35.322803 1429316 default_sa.go:34] waiting for default service account to be created ...
	I0407 12:46:35.325143 1429316 default_sa.go:45] found service account: "default"
	I0407 12:46:35.325165 1429316 default_sa.go:55] duration metric: took 2.356952ms for default service account to be created ...
	I0407 12:46:35.325173 1429316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 12:46:35.328166 1429316 system_pods.go:86] 17 kube-system pods found
	I0407 12:46:35.328197 1429316 system_pods.go:89] "amd-gpu-device-plugin-86df5" [ba9ab47c-61f0-4711-959e-29c976ef7c89] Running
	I0407 12:46:35.328204 1429316 system_pods.go:89] "coredns-668d6bf9bc-28dsp" [c3edd2f1-75f3-4345-9544-93c2a6f0f5d3] Running
	I0407 12:46:35.328211 1429316 system_pods.go:89] "csi-hostpath-attacher-0" [8f7840f4-1626-4a29-be20-6998152854a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:46:35.328218 1429316 system_pods.go:89] "csi-hostpath-resizer-0" [06f1b8f1-d561-44df-8d0e-e5191281a47f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0407 12:46:35.328232 1429316 system_pods.go:89] "csi-hostpathplugin-n7jq8" [7f9c7966-52c5-4bcb-84c7-1915efadd81b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:46:35.328239 1429316 system_pods.go:89] "etcd-ubuntu-20-agent" [13ea58ff-509e-403d-90ae-292ab15ea901] Running
	I0407 12:46:35.328243 1429316 system_pods.go:89] "kube-apiserver-ubuntu-20-agent" [8832ae71-7c9c-4d9e-a74d-d2dc87fcc0a1] Running
	I0407 12:46:35.328248 1429316 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent" [73ba7bcb-e73b-4403-a7d7-9532589d0ab9] Running
	I0407 12:46:35.328251 1429316 system_pods.go:89] "kube-proxy-4ktb9" [f218d86a-31ef-4897-b9e4-d53c0a6eb365] Running
	I0407 12:46:35.328262 1429316 system_pods.go:89] "kube-scheduler-ubuntu-20-agent" [58f3fb78-0ec4-41c5-a20f-9a0df3c2f9ce] Running
	I0407 12:46:35.328271 1429316 system_pods.go:89] "metrics-server-7fbb699795-kfmft" [723d2ed5-e3cb-4cc3-80d7-62e3c337502a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 12:46:35.328275 1429316 system_pods.go:89] "nvidia-device-plugin-daemonset-qtjqk" [861c99d3-8db6-4690-9b9a-9445eb29a1b1] Running
	I0407 12:46:35.328280 1429316 system_pods.go:89] "registry-6c88467877-kwnrb" [4fbcb06c-10f2-48eb-ae63-5c09b49e6099] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0407 12:46:35.328289 1429316 system_pods.go:89] "registry-proxy-gpv45" [1ee0f741-4f8b-4063-832c-bfc311b610aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0407 12:46:35.328300 1429316 system_pods.go:89] "snapshot-controller-68b874b76f-7465t" [bacd4eea-22af-4b2e-a3c3-c11adcd9d06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:46:35.328315 1429316 system_pods.go:89] "snapshot-controller-68b874b76f-bnf6p" [36a09b5c-f06d-41d9-b331-82f98e9152c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:46:35.328320 1429316 system_pods.go:89] "storage-provisioner" [18b8d7ec-1526-45c5-8660-6ab5bcb5dde2] Running
	I0407 12:46:35.328331 1429316 system_pods.go:126] duration metric: took 3.151221ms to wait for k8s-apps to be running ...
	I0407 12:46:35.328339 1429316 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 12:46:35.328391 1429316 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:46:35.342621 1429316 system_svc.go:56] duration metric: took 14.266686ms WaitForService to wait for kubelet
	I0407 12:46:35.342652 1429316 kubeadm.go:582] duration metric: took 15.137567518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:46:35.342672 1429316 node_conditions.go:102] verifying NodePressure condition ...
	I0407 12:46:35.345647 1429316 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0407 12:46:35.345689 1429316 node_conditions.go:123] node cpu capacity is 8
	I0407 12:46:35.345708 1429316 node_conditions.go:105] duration metric: took 3.029456ms to run NodePressure ...
	I0407 12:46:35.345725 1429316 start.go:241] waiting for startup goroutines ...
	I0407 12:46:35.418575 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:35.471738 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:35.919460 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:35.971459 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:36.418927 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:36.470944 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:36.920012 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:36.971236 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:37.419625 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:37.471551 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:37.919187 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:37.971281 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:38.419700 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:38.471414 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:38.919826 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:38.971034 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:39.419257 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:39.471577 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:39.919763 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:39.970822 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:40.419580 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:40.471764 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:40.919389 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:40.971543 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:41.418325 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:41.471154 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:41.919369 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:41.971517 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:42.419213 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:42.471390 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:42.919024 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:46:43.020384 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:43.419813 1429316 kapi.go:107] duration metric: took 22.004486403s to wait for kubernetes.io/minikube-addons=registry ...
	I0407 12:46:43.471031 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:43.972893 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:44.472004 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:44.971721 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:45.471738 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:45.972198 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:46.472443 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:46.972278 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:47.483667 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:47.971419 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:48.472169 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:48.976645 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:49.471072 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:49.971622 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:50.471297 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:50.972415 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:51.471308 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:51.972434 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:52.471555 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:52.975728 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:53.471488 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:53.971405 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:54.471915 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:54.972725 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:55.471662 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:56.020761 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:56.471703 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:56.972347 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:57.471091 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:57.972508 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:58.471079 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:58.972451 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:59.471337 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:46:59.972044 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:00.471100 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:00.972307 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:01.472123 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:01.972205 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:02.472657 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:02.972119 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:03.517910 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:03.972052 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:04.472123 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:04.972034 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:05.471642 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:05.971701 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:06.471445 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:47:06.971897 1429316 kapi.go:107] duration metric: took 43.504396595s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0407 12:47:49.972271 1429316 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0407 12:47:49.972299 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:50.471070 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:50.971560 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:51.472444 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:51.971704 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:52.472395 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:52.977847 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:53.471523 1429316 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:47:53.972000 1429316 kapi.go:107] duration metric: took 1m26.003943819s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0407 12:47:53.973797 1429316 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0407 12:47:53.975209 1429316 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0407 12:47:53.976604 1429316 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0407 12:47:53.978619 1429316 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, inspektor-gadget, yakd, metrics-server, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0407 12:47:53.980134 1429316 addons.go:514] duration metric: took 1m33.781240974s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass inspektor-gadget yakd metrics-server volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0407 12:47:53.980187 1429316 start.go:246] waiting for cluster config update ...
	I0407 12:47:53.980213 1429316 start.go:255] writing updated cluster config ...
	I0407 12:47:53.980556 1429316 exec_runner.go:51] Run: rm -f paused
	I0407 12:47:54.030053 1429316 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 12:47:54.031911 1429316 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2025-02-07 00:17:37 UTC, end at Mon 2025-04-07 13:01:17 UTC. --
	Apr 07 12:54:34 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bcaee81c2176053cde6f17ee9ccabbc9696140381916dfcc8bf691f7e0139f90/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Apr 07 12:54:36 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:54:36Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:latest: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest"
	Apr 07 12:54:36 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:36.163588190Z" level=info msg="ignoring event" container=6b2a60d865994466b018d78a74f0c0e739dc48e1922c015eeba04f9b170a2dbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:54:37 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:37.910003497Z" level=info msg="ignoring event" container=bcaee81c2176053cde6f17ee9ccabbc9696140381916dfcc8bf691f7e0139f90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:54:38 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:38.375935338Z" level=info msg="ignoring event" container=3a2cbb8e4e13124755447d9169868a08306a99dab3bdab7e0770096709a75ec1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:54:38 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:38.429188795Z" level=info msg="ignoring event" container=ae8585906fcb91670d414e02104e0a8afbdb9d72c15bf9db1b5acbfb3a0ece43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:54:38 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:38.504299202Z" level=info msg="ignoring event" container=8817996a24643e17bc87f2f3260d387118217b71a1ae686445baf4948df98b18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:54:38 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:38.571145619Z" level=info msg="ignoring event" container=2d13057c5cdc34dd25f84b83a8b5a38827b8cd83ef45513fe53358e5c34aeb24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:54:44 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:44.940903927Z" level=info msg="ignoring event" container=fdd971918b4ba358c16310e5d7e1bc41400b2fef63c724445f9a2228c8dd3ef6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:54:45 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:45.084161062Z" level=info msg="ignoring event" container=87e95256e81892ccdf1673467d653f21bb7cfe4fd5d64f1dfa94af275024ae87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:54:57 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:57.265053698Z" level=info msg="ignoring event" container=a5658dd8aadd50b84d16aa862ba9a0c94e9b8666e01d105edd70d5952a6938ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:54:57 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:54:57.393269031Z" level=info msg="ignoring event" container=4e9eb194ca166051223c1dac3393ade0e459f01b38db3245fd24e7e18947aeeb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:55:16 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:55:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/867a4c1e9311b3b69f3e6b68d8d6e6c251bb233ad331426e8b0d44fab7eca8a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Apr 07 12:55:18 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:55:18.429173127Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:55:18 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:55:18Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Apr 07 12:55:35 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:55:35.046502006Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:55:35 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:55:35.048380684Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:56:06 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:56:06.032559209Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:56:06 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:56:06.034370535Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:57:00 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:57:00.349801717Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:57:00 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:57:00Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Apr 07 12:58:33 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T12:58:33.353922764Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:58:33 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T12:58:33Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Apr 07 13:01:15 ubuntu-20-agent dockerd[1429533]: time="2025-04-07T13:01:15.376125225Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:15 ubuntu-20-agent cri-dockerd[1429899]: time="2025-04-07T13:01:15Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	62488f220bc01       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          7 minutes ago       Running             busybox                                  0                   64e6d321445b3       busybox
	d8d4df3245c1b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          14 minutes ago      Running             csi-snapshotter                          0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	9e774f36f36c9       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          14 minutes ago      Running             csi-provisioner                          0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	14093b9eed3cd       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            14 minutes ago      Running             liveness-probe                           0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	84f7a19f6f36c       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           14 minutes ago      Running             hostpath                                 0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	4fa315740091f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                14 minutes ago      Running             node-driver-registrar                    0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	647294f13c314       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              14 minutes ago      Running             csi-resizer                              0                   050e14ae928f5       csi-hostpath-resizer-0
	ba7ce3888e0c5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   14 minutes ago      Running             csi-external-health-monitor-controller   0                   8742f0500ba41       csi-hostpathplugin-n7jq8
	2b2cd10e8243c       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             14 minutes ago      Running             csi-attacher                             0                   ddc86519dee5d       csi-hostpath-attacher-0
	a247be89a521f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      14 minutes ago      Running             volume-snapshot-controller               0                   10f25a02c6c25       snapshot-controller-68b874b76f-7465t
	6320ac7c7873b       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      14 minutes ago      Running             volume-snapshot-controller               0                   cee952ace26d6       snapshot-controller-68b874b76f-bnf6p
	3cde9dbb13733       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        14 minutes ago      Running             yakd                                     0                   17c37c1766f9a       yakd-dashboard-575dd5996b-qf5qb
	b7c45376b2746       gcr.io/cloud-spanner-emulator/emulator@sha256:a9c7274e55bba48a4f5bec813a11087d9f2e3a3f7e583dae9873aae2ec17f125                               14 minutes ago      Running             cloud-spanner-emulator                   0                   96406b22e6497       cloud-spanner-emulator-cc9755fc7-8d2gd
	08a692aaf85f6       nvcr.io/nvidia/k8s-device-plugin@sha256:7089559ce6153018806857f5049085bae15b3bf6f1c8bd19d8b12f707d087dea                                     14 minutes ago      Running             nvidia-device-plugin-ctr                 0                   158136d890242       nvidia-device-plugin-daemonset-qtjqk
	28e171950f5a7       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               14 minutes ago      Running             amd-gpu-device-plugin                    0                   0cccbb3588406       amd-gpu-device-plugin-86df5
	9367d6480bcd3       6e38f40d628db                                                                                                                                14 minutes ago      Running             storage-provisioner                      0                   4e46329d24f22       storage-provisioner
	e6de974948a2b       f1332858868e1                                                                                                                                14 minutes ago      Running             kube-proxy                               0                   7cbe52af79cd0       kube-proxy-4ktb9
	634b0f31bf167       c69fa2e9cbf5f                                                                                                                                14 minutes ago      Running             coredns                                  0                   fb409e8883373       coredns-668d6bf9bc-28dsp
	8e962b9f09173       d8e673e7c9983                                                                                                                                15 minutes ago      Running             kube-scheduler                           0                   0cc01a4584319       kube-scheduler-ubuntu-20-agent
	1b21328ae243e       85b7a174738ba                                                                                                                                15 minutes ago      Running             kube-apiserver                           0                   3da9550e5056a       kube-apiserver-ubuntu-20-agent
	e23f65eeb6aff       a9e7e6b294baf                                                                                                                                15 minutes ago      Running             etcd                                     0                   016f56a70aaee       etcd-ubuntu-20-agent
	953db0d2f82d9       b6a454c5a800d                                                                                                                                15 minutes ago      Running             kube-controller-manager                  0                   149fe9b8110db       kube-controller-manager-ubuntu-20-agent
	
	
	==> coredns [634b0f31bf16] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 876af57068f747144f204884e843f6792435faec005aab1f10bd81e6ffca54e010e4374994d8f544c4f6711272ab5662d0892980e63ccc3ba8ba9e3fbcc5e4d9
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43165 - 33942 "HINFO IN 432949529890596107.8050361272252031817. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021642899s
	[INFO] 10.244.0.24:33042 - 27922 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000389816s
	[INFO] 10.244.0.24:42171 - 38582 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184446s
	[INFO] 10.244.0.24:56803 - 17108 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118517s
	[INFO] 10.244.0.24:36839 - 60695 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000170938s
	[INFO] 10.244.0.24:48923 - 36870 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128602s
	[INFO] 10.244.0.24:43224 - 14793 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000199417s
	[INFO] 10.244.0.24:40445 - 11974 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00413958s
	[INFO] 10.244.0.24:38595 - 36532 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004195152s
	[INFO] 10.244.0.24:33576 - 36108 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003551961s
	[INFO] 10.244.0.24:44447 - 31922 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004805135s
	[INFO] 10.244.0.24:42741 - 32070 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003188282s
	[INFO] 10.244.0.24:35696 - 46424 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00369519s
	[INFO] 10.244.0.24:40570 - 13844 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002311578s
	[INFO] 10.244.0.24:45311 - 54943 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.002645389s
	[INFO] 10.244.0.26:44694 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000347795s
	[INFO] 10.244.0.26:54038 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000168512s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_46_15_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:46:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:01:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:00:41 +0000   Mon, 07 Apr 2025 12:46:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:00:41 +0000   Mon, 07 Apr 2025 12:46:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:00:41 +0000   Mon, 07 Apr 2025 12:46:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:00:41 +0000   Mon, 07 Apr 2025 12:46:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.132.0.4
	  Hostname:    ubuntu-20-agent
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                591c9f12-2938-3743-e2bf-c56a050d43d1
	  Boot ID:                    32c262e1-f080-4c3c-9cad-9adf7e4991ef
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.0.4
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  default                     cloud-spanner-emulator-cc9755fc7-8d2gd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     task-pv-pod                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 amd-gpu-device-plugin-86df5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-668d6bf9bc-28dsp                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-n7jq8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ubuntu-20-agent                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kube-apiserver-ubuntu-20-agent             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ubuntu-20-agent    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4ktb9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ubuntu-20-agent             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 nvidia-device-plugin-daemonset-qtjqk       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-68b874b76f-7465t       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-68b874b76f-bnf6p       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  yakd-dashboard              yakd-dashboard-575dd5996b-qf5qb            0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             298Mi (0%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ubuntu-20-agent status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientMemory
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node ubuntu-20-agent status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m                kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node ubuntu-20-agent event: Registered Node ubuntu-20-agent in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 0a 86 62 be 76 08 06
	[  +3.198561] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 81 f4 b0 2d e3 08 06
	[Apr 7 12:47] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e 41 17 ce 62 b6 08 06
	[  +0.558988] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 48 74 4f d6 2f 08 06
	[  +0.109195] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 22 a6 01 38 b3 2f 08 06
	[ +23.480927] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 4e a2 ba 28 37 08 06
	[  +5.548580] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 6e 70 68 84 64 08 06
	[  +0.026445] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 8a 42 e0 9b 75 08 06
	[ +19.909024] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 36 06 3b 6a b8 08 06
	[  +0.000577] IPv4: martian source 10.244.0.24 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 07 5c 69 9a cd 08 06
	[Apr 7 12:54] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e bf f2 32 b0 57 08 06
	[  +0.000549] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 07 5c 69 9a cd 08 06
	[  +0.000643] IPv4: martian source 10.244.0.26 from 10.244.0.8, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 83 90 10 44 0e 08 06
	
	
	==> etcd [e23f65eeb6af] <==
	{"level":"info","ts":"2025-04-07T12:46:10.809709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3d995060bc0a086 elected leader d3d995060bc0a086 at term 2"}
	{"level":"info","ts":"2025-04-07T12:46:10.810586Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"d3d995060bc0a086","local-member-attributes":"{Name:ubuntu-20-agent ClientURLs:[https://10.132.0.4:2379]}","request-path":"/0/members/d3d995060bc0a086/attributes","cluster-id":"36fd114adae62b7a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T12:46:10.810757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:46:10.810736Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:46:10.810857Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T12:46:10.810931Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T12:46:10.810645Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:46:10.811710Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:46:10.811768Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:46:10.811988Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"36fd114adae62b7a","local-member-id":"d3d995060bc0a086","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:46:10.812087Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:46:10.812120Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:46:10.812616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.132.0.4:2379"}
	{"level":"info","ts":"2025-04-07T12:46:10.812671Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T12:46:27.716881Z","caller":"traceutil/trace.go:171","msg":"trace[1770517557] linearizableReadLoop","detail":"{readStateIndex:875; appliedIndex:873; }","duration":"121.221478ms","start":"2025-04-07T12:46:27.595638Z","end":"2025-04-07T12:46:27.716859Z","steps":["trace[1770517557] 'read index received'  (duration: 58.788992ms)","trace[1770517557] 'applied index is now lower than readState.Index'  (duration: 62.431839ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T12:46:27.717075Z","caller":"traceutil/trace.go:171","msg":"trace[1614856425] transaction","detail":"{read_only:false; response_revision:855; number_of_response:1; }","duration":"123.047449ms","start":"2025-04-07T12:46:27.594011Z","end":"2025-04-07T12:46:27.717058Z","steps":["trace[1614856425] 'process raft request'  (duration: 60.314585ms)","trace[1614856425] 'compare'  (duration: 62.231306ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:46:27.717164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.503421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" limit:1 ","response":"range_response_count:1 size:716"}
	{"level":"info","ts":"2025-04-07T12:46:27.717216Z","caller":"traceutil/trace.go:171","msg":"trace[1260179640] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:856; }","duration":"121.596627ms","start":"2025-04-07T12:46:27.595610Z","end":"2025-04-07T12:46:27.717207Z","steps":["trace[1260179640] 'agreement among raft nodes before linearized reading'  (duration: 121.409977ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:46:27.717377Z","caller":"traceutil/trace.go:171","msg":"trace[2146819799] transaction","detail":"{read_only:false; response_revision:856; number_of_response:1; }","duration":"123.358599ms","start":"2025-04-07T12:46:27.594010Z","end":"2025-04-07T12:46:27.717368Z","steps":["trace[2146819799] 'process raft request'  (duration: 122.792111ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:56:11.523717Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1564}
	{"level":"info","ts":"2025-04-07T12:56:11.537557Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1564,"took":"13.328253ms","hash":3499056740,"current-db-size-bytes":9375744,"current-db-size":"9.4 MB","current-db-size-in-use-bytes":5595136,"current-db-size-in-use":"5.6 MB"}
	{"level":"info","ts":"2025-04-07T12:56:11.537604Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3499056740,"revision":1564,"compact-revision":-1}
	{"level":"info","ts":"2025-04-07T13:01:11.528657Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2263}
	{"level":"info","ts":"2025-04-07T13:01:11.541819Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":2263,"took":"12.571715ms","hash":4284666082,"current-db-size-bytes":9375744,"current-db-size":"9.4 MB","current-db-size-in-use-bytes":2785280,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-04-07T13:01:11.541863Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4284666082,"revision":2263,"compact-revision":1564}
	
	
	==> kernel <==
	 13:01:17 up  4:43,  0 users,  load average: 0.11, 0.28, 0.96
	Linux ubuntu-20-agent 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [1b21328ae243] <==
	E0407 12:47:30.984768       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	W0407 12:47:30.996198       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
	E0407 12:47:30.996239       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	W0407 12:47:49.957919       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.49.30:443: connect: connection refused
	E0407 12:47:49.957967       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.49.30:443: connect: connection refused" logger="UnhandledError"
	I0407 12:53:56.176161       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0407 12:53:56.193607       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0407 12:53:56.328791       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0407 12:53:56.341956       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0407 12:53:56.354989       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0407 12:53:56.521759       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0407 12:53:56.561633       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0407 12:53:56.611561       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0407 12:53:57.222671       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0407 12:53:57.482426       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0407 12:53:57.482476       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0407 12:53:57.482500       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0407 12:53:57.482810       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0407 12:53:57.612659       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0407 12:53:57.772672       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0407 12:54:16.224199       1 conn.go:339] Error on socket receive: read tcp 10.132.0.4:8443->10.132.0.4:57940: use of closed network connection
	E0407 12:54:16.405527       1 conn.go:339] Error on socket receive: read tcp 10.132.0.4:8443->10.132.0.4:57972: use of closed network connection
	I0407 12:54:44.627985       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0407 12:54:45.745688       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0407 12:55:46.413402       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [953db0d2f82d] <==
	E0407 13:00:42.460878       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:00:46.676920       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:00:46.677947       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="nodeinfo.volcano.sh/v1alpha1, Resource=numatopologies"
	W0407 13:00:46.678825       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:00:46.678862       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:00:51.165952       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:00:51.167017       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="flow.volcano.sh/v1alpha1, Resource=jobflows"
	W0407 13:00:51.167920       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:00:51.167956       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:00:54.328807       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:00:54.329851       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="bus.volcano.sh/v1alpha1, Resource=commands"
	W0407 13:00:54.330755       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:00:54.330800       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:01:02.774519       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:01:02.775484       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="scheduling.volcano.sh/v1beta1, Resource=podgroups"
	W0407 13:01:02.776384       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:01:02.776439       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:01:03.957975       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:01:03.959041       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="flow.volcano.sh/v1alpha1, Resource=jobtemplates"
	W0407 13:01:03.960150       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:01:03.960192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:01:15.037538       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:01:15.038924       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="batch.volcano.sh/v1alpha1, Resource=jobs"
	W0407 13:01:15.040039       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:01:15.040083       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [e6de974948a2] <==
	I0407 12:46:21.832215       1 server_linux.go:66] "Using iptables proxy"
	I0407 12:46:22.002444       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["10.132.0.4"]
	E0407 12:46:22.002521       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:46:22.084578       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0407 12:46:22.084642       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:46:22.090930       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:46:22.091456       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:46:22.091487       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:46:22.104770       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:46:22.104822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:46:22.104856       1 config.go:199] "Starting service config controller"
	I0407 12:46:22.104861       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:46:22.105247       1 config.go:329] "Starting node config controller"
	I0407 12:46:22.105262       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:46:22.207396       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:46:22.207478       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 12:46:22.211383       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8e962b9f0917] <==
	W0407 12:46:12.396702       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:46:12.396720       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.243091       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 12:46:13.243142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.305117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 12:46:13.305161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.312894       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 12:46:13.312941       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.314239       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 12:46:13.314279       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.357817       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 12:46:13.357865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.450908       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0407 12:46:13.450956       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.517730       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:46:13.517783       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 12:46:13.524289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0407 12:46:13.524338       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.554955       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:46:13.554999       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.556960       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:46:13.556999       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:46:13.658851       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 12:46:13.658899       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0407 12:46:15.991211       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2025-02-07 00:17:37 UTC, end at Mon 2025-04-07 13:01:17 UTC. --
	Apr 07 12:58:19 ubuntu-20-agent kubelet[1430849]: E0407 12:58:19.968340 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 12:58:25 ubuntu-20-agent kubelet[1430849]: I0407 12:58:25.968509 1430849 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-cc9755fc7-8d2gd" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 12:58:33 ubuntu-20-agent kubelet[1430849]: E0407 12:58:33.356444 1430849 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Apr 07 12:58:33 ubuntu-20-agent kubelet[1430849]: E0407 12:58:33.356508 1430849 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Apr 07 12:58:33 ubuntu-20-agent kubelet[1430849]: E0407 12:58:33.356620 1430849 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfbbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(2eb63804-3289-4376-94f2-e061287276c0): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 12:58:33 ubuntu-20-agent kubelet[1430849]: E0407 12:58:33.357818 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 12:58:47 ubuntu-20-agent kubelet[1430849]: E0407 12:58:47.969061 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 12:59:00 ubuntu-20-agent kubelet[1430849]: E0407 12:59:00.969579 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 12:59:13 ubuntu-20-agent kubelet[1430849]: E0407 12:59:13.968803 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 12:59:26 ubuntu-20-agent kubelet[1430849]: E0407 12:59:26.969062 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 12:59:34 ubuntu-20-agent kubelet[1430849]: I0407 12:59:34.969326 1430849 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 12:59:37 ubuntu-20-agent kubelet[1430849]: E0407 12:59:37.969009 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 12:59:51 ubuntu-20-agent kubelet[1430849]: E0407 12:59:51.968897 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 12:59:53 ubuntu-20-agent kubelet[1430849]: I0407 12:59:53.969054 1430849 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-cc9755fc7-8d2gd" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 13:00:03 ubuntu-20-agent kubelet[1430849]: E0407 13:00:03.968665 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 13:00:16 ubuntu-20-agent kubelet[1430849]: E0407 13:00:16.969136 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 13:00:30 ubuntu-20-agent kubelet[1430849]: E0407 13:00:30.968894 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 13:00:43 ubuntu-20-agent kubelet[1430849]: E0407 13:00:43.968667 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 13:00:44 ubuntu-20-agent kubelet[1430849]: I0407 13:00:44.969081 1430849 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 13:00:55 ubuntu-20-agent kubelet[1430849]: I0407 13:00:55.969104 1430849 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-cc9755fc7-8d2gd" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 13:00:58 ubuntu-20-agent kubelet[1430849]: E0407 13:00:58.970735 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	Apr 07 13:01:15 ubuntu-20-agent kubelet[1430849]: E0407 13:01:15.378484 1430849 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Apr 07 13:01:15 ubuntu-20-agent kubelet[1430849]: E0407 13:01:15.378557 1430849 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Apr 07 13:01:15 ubuntu-20-agent kubelet[1430849]: E0407 13:01:15.378670 1430849 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfbbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(2eb63804-3289-4376-94f2-e061287276c0): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 13:01:15 ubuntu-20-agent kubelet[1430849]: E0407 13:01:15.379854 1430849 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2eb63804-3289-4376-94f2-e061287276c0"
	
	
	==> storage-provisioner [9367d6480bcd] <==
	I0407 12:46:22.691767       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:46:22.700993       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:46:22.701760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:46:22.709645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:46:22.709901       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_d2ced8c7-5bce-4be8-ab28-23171422388c!
	I0407 12:46:22.710495       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"088063e6-27ee-4b45-98d2-8cc5af467fa3", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent_d2ced8c7-5bce-4be8-ab28-23171422388c became leader
	I0407 12:46:22.810972       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_d2ced8c7-5bce-4be8-ab28-23171422388c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: task-pv-pod
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod task-pv-pod
helpers_test.go:282: (dbg) kubectl --context minikube describe pod task-pv-pod:

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent/10.132.0.4
	Start Time:       Mon, 07 Apr 2025 12:55:16 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfbbd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-zfbbd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/task-pv-pod to ubuntu-20-agent
	  Warning  Failed     5m12s (x2 over 5m43s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m47s (x5 over 6m1s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m45s (x3 over 6m)     kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m45s (x5 over 6m)     kubelet            Error: ErrImagePull
	  Warning  Failed     48s (x20 over 5m59s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    35s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.346447831s)
--- FAIL: TestAddons/parallel/CSI (388.72s)


Test pass (104/170)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 24.51
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 1.33
15 TestDownloadOnly/v1.32.2/binaries 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.07
18 TestDownloadOnly/v1.32.2/DeleteAll 0.13
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 41.43
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 112.85
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.51
35 TestAddons/parallel/Registry 14.85
37 TestAddons/parallel/InspektorGadget 11.43
38 TestAddons/parallel/MetricsServer 6.42
41 TestAddons/parallel/Headlamp 16.94
42 TestAddons/parallel/CloudSpanner 5.27
44 TestAddons/parallel/NvidiaDevicePlugin 6.25
45 TestAddons/parallel/Yakd 10.46
47 TestAddons/StoppedEnableDisable 10.69
49 TestCertExpiration 227.38
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 30.56
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 31.96
64 TestFunctional/serial/KubeContext 0.05
65 TestFunctional/serial/KubectlGetPods 0.08
67 TestFunctional/serial/MinikubeKubectlCmd 0.12
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
69 TestFunctional/serial/ExtraConfig 38.92
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 0.87
72 TestFunctional/serial/LogsFileCmd 0.9
73 TestFunctional/serial/InvalidService 4.05
75 TestFunctional/parallel/ConfigCmd 0.3
76 TestFunctional/parallel/DashboardCmd 11.12
77 TestFunctional/parallel/DryRun 0.17
78 TestFunctional/parallel/InternationalLanguage 0.09
79 TestFunctional/parallel/StatusCmd 0.44
82 TestFunctional/parallel/ProfileCmd/profile_not_create 0.24
83 TestFunctional/parallel/ProfileCmd/profile_list 0.23
84 TestFunctional/parallel/ProfileCmd/profile_json_output 0.22
86 TestFunctional/parallel/ServiceCmd/DeployApp 10.15
87 TestFunctional/parallel/ServiceCmd/List 0.35
88 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
89 TestFunctional/parallel/ServiceCmd/HTTPS 0.16
90 TestFunctional/parallel/ServiceCmd/Format 0.16
91 TestFunctional/parallel/ServiceCmd/URL 0.15
92 TestFunctional/parallel/ServiceCmdConnect 7.32
93 TestFunctional/parallel/AddonsCmd 0.13
94 TestFunctional/parallel/PersistentVolumeClaim 25.46
107 TestFunctional/parallel/MySQL 21.05
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.63
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.36
116 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.42
122 TestFunctional/parallel/License 0.63
123 TestFunctional/delete_echo-server_images 0.03
124 TestFunctional/delete_my-image_image 0.02
125 TestFunctional/delete_minikube_cached_images 0.02
131 TestImageBuild/serial/Setup 14.73
132 TestImageBuild/serial/NormalBuild 1.02
133 TestImageBuild/serial/BuildWithBuildArg 0.67
134 TestImageBuild/serial/BuildWithDockerIgnore 0.38
135 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.38
139 TestJSONOutput/start/Command 27.81
140 TestJSONOutput/start/Audit 0
142 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
143 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
145 TestJSONOutput/pause/Command 0.51
146 TestJSONOutput/pause/Audit 0
148 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
149 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
151 TestJSONOutput/unpause/Command 0.43
152 TestJSONOutput/unpause/Audit 0
154 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/stop/Command 10.43
158 TestJSONOutput/stop/Audit 0
160 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
162 TestErrorJSONOutput 0.21
167 TestMainNoArgs 0.05
168 TestMinikubeProfile 34.28
176 TestPause/serial/Start 26.94
177 TestPause/serial/SecondStartNoReconfiguration 29.66
178 TestPause/serial/Pause 0.51
179 TestPause/serial/VerifyStatus 0.14
180 TestPause/serial/Unpause 0.42
181 TestPause/serial/PauseAgain 0.54
182 TestPause/serial/DeletePaused 1.72
183 TestPause/serial/VerifyDeletedResources 0.08
197 TestRunningBinaryUpgrade 73.58
199 TestStoppedBinaryUpgrade/Setup 2.4
200 TestStoppedBinaryUpgrade/Upgrade 51.32
201 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
202 TestKubernetesUpgrade 306.44
TestDownloadOnly/v1.20.0/json-events (24.51s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (24.514126727s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.51s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (67.477897ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:44 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:44:52
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:44:52.302982 1425528 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:44:52.303103 1425528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:44:52.303112 1425528 out.go:358] Setting ErrFile to fd 2...
	I0407 12:44:52.303116 1425528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:44:52.303346 1425528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1418173/.minikube/bin
	W0407 12:44:52.303482 1425528 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20598-1418173/.minikube/config/config.json: open /home/jenkins/minikube-integration/20598-1418173/.minikube/config/config.json: no such file or directory
	I0407 12:44:52.304132 1425528 out.go:352] Setting JSON to true
	I0407 12:44:52.305083 1425528 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":16036,"bootTime":1744013856,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:44:52.305206 1425528 start.go:139] virtualization: kvm guest
	I0407 12:44:52.307368 1425528 out.go:97] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 12:44:52.307522 1425528 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:44:52.307585 1425528 notify.go:220] Checking for updates...
	I0407 12:44:52.308868 1425528 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:44:52.310189 1425528 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:44:52.311310 1425528 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
	I0407 12:44:52.312429 1425528 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
	I0407 12:44:52.313581 1425528 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:44:52.315940 1425528 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:44:52.316176 1425528 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:44:52.329230 1425528 out.go:97] Using the none driver based on user configuration
	I0407 12:44:52.329269 1425528 start.go:297] selected driver: none
	I0407 12:44:52.329275 1425528 start.go:901] validating driver "none" against <nil>
	I0407 12:44:52.329319 1425528 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0407 12:44:52.329690 1425528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:44:52.330238 1425528 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0407 12:44:52.330371 1425528 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:44:52.330398 1425528 cni.go:84] Creating CNI manager for ""
	I0407 12:44:52.330444 1425528 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0407 12:44:52.330485 1425528 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:44:52.332077 1425528 out.go:97] Starting "minikube" primary control-plane node in "minikube" cluster
	I0407 12:44:52.332453 1425528 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json ...
	I0407 12:44:52.332485 1425528 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1418173/.minikube/profiles/minikube/config.json: {Name:mk7435778f484db7c9644d73cb119c70d439299f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:44:52.332636 1425528 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:44:52.332879 1425528 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.20.0/kubectl
	I0407 12:44:52.332893 1425528 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.20.0/kubeadm
	I0407 12:44:52.333013 1425528 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/linux/amd64/v1.20.0/kubelet
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.2/json-events (1.33s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.332417339s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (1.33s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
--- PASS: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (68.088721ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:44 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC | 07 Apr 25 12:45 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.35.0 | 07 Apr 25 12:45 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:45:17
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:45:17.164789 1425671 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:45:17.164895 1425671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:45:17.164900 1425671 out.go:358] Setting ErrFile to fd 2...
	I0407 12:45:17.164903 1425671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:45:17.165104 1425671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1418173/.minikube/bin
	I0407 12:45:17.165685 1425671 out.go:352] Setting JSON to true
	I0407 12:45:17.166686 1425671 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":16061,"bootTime":1744013856,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:45:17.166801 1425671 start.go:139] virtualization: kvm guest
	I0407 12:45:17.169217 1425671 out.go:97] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 12:45:17.169382 1425671 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:45:17.169432 1425671 notify.go:220] Checking for updates...
	I0407 12:45:17.170881 1425671 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:45:17.172468 1425671 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:45:17.174136 1425671 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
	I0407 12:45:17.175775 1425671 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
	I0407 12:45:17.177294 1425671 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I0407 12:45:19.063378 1425516 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:38191 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.59s)

TestOffline (41.43s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (39.745067277s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.680124282s)
--- PASS: TestOffline (41.43s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (55.029047ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (53.845487ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (112.85s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=none --bootstrapper=kubeadm: (1m52.85315093s)
--- PASS: TestAddons/Setup (112.85s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context minikube create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context minikube create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5154a6f6-b3d7-4b4f-a840-f75c5f3428b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5154a6f6-b3d7-4b4f-a840-f75c5f3428b5] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003112984s
addons_test.go:633: (dbg) Run:  kubectl --context minikube exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context minikube describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context minikube exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.51s)

TestAddons/parallel/Registry (14.85s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.983097ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-kwnrb" [4fbcb06c-10f2-48eb-ae63-5c09b49e6099] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00349108s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gpv45" [1ee0f741-4f8b-4063-832c-bfc311b610aa] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003602857s
addons_test.go:331: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.356600676s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2025/04/07 12:54:38 [DEBUG] GET http://10.132.0.4:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.85s)

TestAddons/parallel/InspektorGadget (11.43s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qfz76" [27304764-d413-4cea-a1c9-743f97a042f1] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003535766s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable inspektor-gadget --alsologtostderr -v=1: (5.421754913s)
--- PASS: TestAddons/parallel/InspektorGadget (11.43s)

TestAddons/parallel/MetricsServer (6.42s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 18.745991ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-kfmft" [723d2ed5-e3cb-4cc3-80d7-62e3c337502a] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004323541s
addons_test.go:402: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.42s)

TestAddons/parallel/Headlamp (16.94s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-zwrf2" [544bda92-63b2-49ad-ad76-dc7998ade4cb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-zwrf2" [544bda92-63b2-49ad-ad76-dc7998ade4cb] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004255792s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.459253176s)
--- PASS: TestAddons/parallel/Headlamp (16.94s)

TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-8d2gd" [18a0a759-0d6c-4211-b9eb-066f099ce93a] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00353605s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.27s)

TestAddons/parallel/NvidiaDevicePlugin (6.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qtjqk" [861c99d3-8db6-4690-9b9a-9445eb29a1b1] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004215016s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.25s)

TestAddons/parallel/Yakd (10.46s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-qf5qb" [61ddda36-99ba-4ba8-9606-da0afae3b815] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003505734s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.459944591s)
--- PASS: TestAddons/parallel/Yakd (10.46s)

TestAddons/StoppedEnableDisable (10.69s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.361832205s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.69s)

TestCertExpiration (227.38s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.222012453s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (31.390601882s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.770024413s)
--- PASS: TestCertExpiration (227.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20598-1418173/.minikube/files/etc/test/nested/copy/1425516/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (30.56s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (30.559477312s)
--- PASS: TestFunctional/serial/StartWithProxy (30.56s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (31.96s)

=== RUN   TestFunctional/serial/SoftStart
I0407 13:06:33.772117 1425516 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (31.956798875s)
functional_test.go:680: soft start took 31.957405695s for "minikube" cluster.
I0407 13:07:05.729205 1425516 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (31.96s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (38.92s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.919231465s)
functional_test.go:778: restart took 38.919370649s for "minikube" cluster.
I0407 13:07:45.003285 1425516 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (38.92s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.87s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.87s)

TestFunctional/serial/LogsFileCmd (0.9s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd553490800/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.90s)

TestFunctional/serial/InvalidService (4.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (175.742079ms)

-- stdout --
	|-----------|-------------|-------------|-------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL           |
	|-----------|-------------|-------------|-------------------------|
	| default   | invalid-svc |          80 | http://10.132.0.4:30813 |
	|-----------|-------------|-------------|-------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)

TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (48.917685ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (49.361437ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

TestFunctional/parallel/DashboardCmd (11.12s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2025/04/07 13:08:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1465571: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.12s)

TestFunctional/parallel/DryRun (0.17s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (86.775626ms)

-- stdout --
	* minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile

-- /stdout --
** stderr ** 
	I0407 13:08:02.360336 1465957 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:08:02.360603 1465957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:08:02.360613 1465957 out.go:358] Setting ErrFile to fd 2...
	I0407 13:08:02.360617 1465957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:08:02.360828 1465957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1418173/.minikube/bin
	I0407 13:08:02.361370 1465957 out.go:352] Setting JSON to false
	I0407 13:08:02.363842 1465957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":17426,"bootTime":1744013856,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:08:02.363913 1465957 start.go:139] virtualization: kvm guest
	I0407 13:08:02.365885 1465957 out.go:177] * minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 13:08:02.367998 1465957 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 13:08:02.368052 1465957 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:08:02.368076 1465957 notify.go:220] Checking for updates...
	I0407 13:08:02.370667 1465957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:08:02.371894 1465957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
	I0407 13:08:02.373230 1465957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
	I0407 13:08:02.374404 1465957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:08:02.375502 1465957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:08:02.377060 1465957 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:08:02.377372 1465957 exec_runner.go:51] Run: systemctl --version
	I0407 13:08:02.379919 1465957 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:08:02.391857 1465957 out.go:177] * Using the none driver based on existing profile
	I0407 13:08:02.393069 1465957 start.go:297] selected driver: none
	I0407 13:08:02.393082 1465957 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.132.0.4 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:08:02.393195 1465957 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:08:02.393221 1465957 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0407 13:08:02.393500 1465957 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0407 13:08:02.395987 1465957 out.go:201] 
	W0407 13:08:02.397094 1465957 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 13:08:02.398331 1465957 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.17s)

TestFunctional/parallel/InternationalLanguage (0.09s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (88.364704ms)

-- stdout --
	* minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant

-- /stdout --
** stderr ** 
	I0407 13:08:02.533859 1465987 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:08:02.534110 1465987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:08:02.534118 1465987 out.go:358] Setting ErrFile to fd 2...
	I0407 13:08:02.534123 1465987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:08:02.534425 1465987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1418173/.minikube/bin
	I0407 13:08:02.534977 1465987 out.go:352] Setting JSON to false
	I0407 13:08:02.536126 1465987 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":17426,"bootTime":1744013856,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:08:02.536224 1465987 start.go:139] virtualization: kvm guest
	I0407 13:08:02.538302 1465987 out.go:177] * minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	W0407 13:08:02.539885 1465987 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-1418173/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 13:08:02.539921 1465987 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:08:02.539945 1465987 notify.go:220] Checking for updates...
	I0407 13:08:02.542717 1465987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:08:02.544156 1465987 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
	I0407 13:08:02.545501 1465987 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
	I0407 13:08:02.546820 1465987 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:08:02.548049 1465987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:08:02.549724 1465987 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:08:02.550037 1465987 exec_runner.go:51] Run: systemctl --version
	I0407 13:08:02.552652 1465987 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:08:02.564181 1465987 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0407 13:08:02.565634 1465987 start.go:297] selected driver: none
	I0407 13:08:02.565655 1465987 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.132.0.4 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:08:02.565747 1465987 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:08:02.565769 1465987 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0407 13:08:02.566064 1465987 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0407 13:08:02.568569 1465987 out.go:201] 
	W0407 13:08:02.569831 1465987 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0407 13:08:02.571055 1465987 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)
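The dry-run above exits with status 23 because the requested 250MB is below minikube's usable minimum of 1800MB (the localized RSRC_INSUFFICIENT_REQ_MEMORY message in the stderr capture). As a rough illustration only — the function name and script are hypothetical, not minikube code — the check amounts to:

```shell
# Hypothetical sketch of the memory validation seen failing above.
# Values come from this run's log: --memory 250MB vs. a 1800MB minimum.
check_memory() {
  local requested_mb=$1 minimum_mb=1800
  if [ "$requested_mb" -lt "$minimum_mb" ]; then
    echo "RSRC_INSUFFICIENT_REQ_MEMORY: ${requested_mb}MiB < ${minimum_mb}MB minimum"
    return 23   # mirrors the exit status 23 observed in the log
  fi
}
check_memory 250 || true   # prints the error message; returns 23
```

The test still passes because it only asserts that the expected localized error is produced, not that the start succeeds.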

                                                
                                    
TestFunctional/parallel/StatusCmd (0.44s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "178.96159ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "52.00269ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "166.293688ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "49.984517ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-nhfjf" [deab1b5c-53ac-4df6-af23-1cef194ba577] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-nhfjf" [deab1b5c-53ac-4df6-af23-1cef194ba577] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004469234s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1511: Took "341.805323ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://10.132.0.4:30699
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)
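The endpoint found above is simply the node's IP joined with the NodePort that Kubernetes allocated for the service; `minikube service --https --url` performs that lookup. A minimal sketch of the URL construction, using the values from this run:

```shell
# A NodePort service URL is node IP + allocated port (values from this run).
node_ip="10.132.0.4"
node_port=30699
url="https://${node_ip}:${node_port}"
echo "$url"
```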

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://10.132.0.4:30699
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.32s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-f4jdc" [e878e9ed-39b3-4ea4-9bad-a683630e4761] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-f4jdc" [e878e9ed-39b3-4ea4-9bad-a683630e4761] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004219998s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://10.132.0.4:31617
functional_test.go:1692: http://10.132.0.4:31617: success! body:

Hostname: hello-node-connect-58f9cf68d8-f4jdc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.132.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=10.132.0.4:31617
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.32s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.46s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [70b84852-b533-4f6a-b4fe-63967b033186] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003399648s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [331c4494-c489-4976-b463-a3e947481108] Pending
helpers_test.go:344: "sp-pod" [331c4494-c489-4976-b463-a3e947481108] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [331c4494-c489-4976-b463-a3e947481108] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003884986s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [07ac9b91-39e5-4460-a211-b2a15047d3b0] Pending
helpers_test.go:344: "sp-pod" [07ac9b91-39e5-4460-a211-b2a15047d3b0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [07ac9b91-39e5-4460-a211-b2a15047d3b0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003557623s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.46s)

                                                
                                    
TestFunctional/parallel/MySQL (21.05s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-bmdzd" [675fb6c5-74f2-4eb0-b7e3-e45a0a0e60b0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-bmdzd" [675fb6c5-74f2-4eb0-b7e3-e45a0a0e60b0] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003708231s
functional_test.go:1824: (dbg) Run:  kubectl --context minikube exec mysql-58ccfd96bb-bmdzd -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context minikube exec mysql-58ccfd96bb-bmdzd -- mysql -ppassword -e "show databases;": exit status 1 (114.872501ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0407 13:09:06.276310 1425516 retry.go:31] will retry after 994.030593ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context minikube exec mysql-58ccfd96bb-bmdzd -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context minikube exec mysql-58ccfd96bb-bmdzd -- mysql -ppassword -e "show databases;": exit status 1 (113.834206ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0407 13:09:07.385497 1425516 retry.go:31] will retry after 1.541438831s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context minikube exec mysql-58ccfd96bb-bmdzd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.05s)
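The two ERROR 2002 failures above occur because the `mysql` container is Running before mysqld is ready to accept socket connections; the test's retry.go then re-runs the command with increasing delays (994ms, then 1.54s) until it succeeds. A generic retry-with-backoff sketch — the helper name is hypothetical and this is not minikube's actual retry.go, which randomizes its delays:

```shell
# Generic retry-with-backoff: re-run a command until it succeeds
# or the attempt budget is exhausted, doubling the delay each time.
retry_cmd() {
  local attempts=$1; shift
  local delay=1 i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0            # command succeeded
    fi
    sleep "$delay"        # wait before the next attempt
    delay=$((delay * 2))  # back off
  done
  return 1                # all attempts failed
}
```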

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.63s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.62714339s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.63s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (13.36s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.359288689s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.36s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.42s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (14.73s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.728071745s)
--- PASS: TestImageBuild/serial/Setup (14.73s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.02s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.022354572s)
--- PASS: TestImageBuild/serial/NormalBuild (1.02s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.67s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.67s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.38s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.38s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.38s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.38s)

                                                
                                    
TestJSONOutput/start/Command (27.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (27.804890276s)
--- PASS: TestJSONOutput/start/Command (27.81s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.51s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.432612513s)
--- PASS: TestJSONOutput/stop/Command (10.43s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.81757ms)
-- stdout --
	{"specversion":"1.0","id":"5cd22b5f-7d3a-41c9-86c9-ad1f2a079f88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6533178-3531-4577-84b6-cf0ec1e70fad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20598"}}
	{"specversion":"1.0","id":"76eadced-63a9-4903-8926-d4e2ddbcd379","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4a424fcf-4ac3-44fb-899f-a7a39f26bfcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig"}}
	{"specversion":"1.0","id":"ecdf6ac9-cb82-4865-910d-8690608875d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube"}}
	{"specversion":"1.0","id":"548b9981-4008-4ebb-9528-bd2146203198","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"10b56168-1d85-4138-a710-1e0b8c28deb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2ca3daa9-fc13-4870-b36e-cb62016c0e17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.21s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.738390877s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.654583017s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.292977498s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.28s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (26.93912994s)
--- PASS: TestPause/serial/Start (26.94s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.664081247s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.66s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (135.670513ms)
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.42s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.721953982s)
--- PASS: TestPause/serial/DeletePaused (1.72s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.08s)
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4036611436 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4036611436 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (32.1259889s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (35.724542676s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.252399995s)
--- PASS: TestRunningBinaryUpgrade (73.58s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.40s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3041232082 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3041232082 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (15.238573341s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3041232082 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3041232082 -p minikube stop: (23.672649922s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.41052559s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.32s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (28.166714791s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.795384964s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (78.092738ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m16.716477963s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (81.393846ms)
-- stdout --
	* minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-1418173/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1418173/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start --kubernetes-version=v1.32.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.263383009s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.275426597s)
--- PASS: TestKubernetesUpgrade (306.44s)

Test skip (64/170)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.32.2/preload-exists 0
14 TestDownloadOnly/v1.32.2/cached-images 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
36 TestAddons/parallel/Ingress 0
39 TestAddons/parallel/Olm 0
43 TestAddons/parallel/LocalPath 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
48 TestCertOptions 0
50 TestDockerFlags 0
51 TestForceSystemdFlag 0
52 TestForceSystemdEnv 0
53 TestDockerEnvContainerd 0
54 TestKVMDriverInstallOrUpdate 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
57 TestErrorSpam 0
66 TestFunctional/serial/CacheCmd 0
80 TestFunctional/parallel/MountCmd 0
97 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
98 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
99 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
105 TestFunctional/parallel/SSHCmd 0
106 TestFunctional/parallel/CpCmd 0
108 TestFunctional/parallel/FileSync 0
109 TestFunctional/parallel/CertSync 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/ImageCommands 0
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0
126 TestFunctionalNewestKubernetes 0
127 TestGvisorAddon 0
128 TestMultiControlPlane 0
136 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
163 TestKicCustomNetwork 0
164 TestKicExistingNetwork 0
165 TestKicCustomSubnet 0
166 TestKicStaticIP 0
169 TestMountStart 0
170 TestMultiNode 0
171 TestNetworkPlugins 0
172 TestNoKubernetes 0
173 TestChangeNoneUser 0
184 TestPreload 0
185 TestScheduledStopWindows 0
186 TestScheduledStopUnix 0
187 TestSkaffold 0
190 TestStartStop/group/old-k8s-version 0.14
191 TestStartStop/group/newest-cni 0.14
192 TestStartStop/group/default-k8s-diff-port 0.15
193 TestStartStop/group/no-preload 0.15
194 TestStartStop/group/disable-driver-mounts 0.15
195 TestStartStop/group/embed-certs 0.15
196 TestInsufficientStorage 0
203 TestMissingContainerUpgrade 0
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.32.2/preload-exists (0.00s)
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)
=== RUN   TestAddons/parallel/Ingress
addons_test.go:193: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)
=== RUN   TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:882: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1058: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1734: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1777: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1941: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1972: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:475: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:562: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:309: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2033: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.14s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:98: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.14s)

TestStartStop/group/newest-cni (0.14s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:98: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.14s)

TestStartStop/group/default-k8s-diff-port (0.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:98: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.15s)

TestStartStop/group/no-preload (0.15s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:98: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.15s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:98: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestStartStop/group/embed-certs (0.15s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:98: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.15s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)