Test Report: none_Linux 20052

8d1e3f592e1f661c71a144f8266060bd168d3f35 : 2024-12-05 : 37356

Tests failed (1/169)

|-------|---------------------------|--------------|
| Order |        Failed test        | Duration (s) |
|-------|---------------------------|--------------|
|    29 | TestAddons/serial/Volcano |       372.88 |
|-------|---------------------------|--------------|
TestAddons/serial/Volcano (372.88s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 10.636758ms
addons_test.go:807: volcano-scheduler stabilized in 10.654352ms
addons_test.go:815: volcano-admission stabilized in 10.8629ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-q7mcw" [33f5e98f-fb04-4f70-b72c-d223e4812765] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "volcano-system" "app=volcano-scheduler" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:829: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:829: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:829: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-12-05 18:54:12.469023968 +0000 UTC m=+507.085004487
addons_test.go:829: (dbg) Run:  kubectl --context minikube describe po volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system
addons_test.go:829: (dbg) kubectl --context minikube describe po volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system:
Name:                 volcano-scheduler-6c9778cbdf-q7mcw
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 ubuntu-20-agent-15/10.128.15.240
Start Time:           Thu, 05 Dec 2024 18:46:52 +0000
Labels:               app=volcano-scheduler
pod-template-hash=6c9778cbdf
Annotations:          <none>
Status:               Pending
IP:                   10.244.0.17
IPs:
IP:           10.244.0.17
Controlled By:  ReplicaSet/volcano-scheduler-6c9778cbdf
Containers:
volcano-scheduler:
Container ID:  
Image:         docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882
Image ID:      
Port:          <none>
Host Port:     <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
-v=3
2>&1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
DEBUG_SOCKET_DIR:  /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4bz59 (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
scheduler-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      volcano-scheduler-configmap
Optional:  false
klog-sock:
Type:          HostPath (bare host directory volume)
Path:          /tmp/klog-socks
HostPathType:  
kube-api-access-4bz59:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  7m20s                   default-scheduler  Successfully assigned volcano-system/volcano-scheduler-6c9778cbdf-q7mcw to ubuntu-20-agent-15
Warning  Failed     6m41s                   kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    5m35s (x4 over 7m20s)   kubelet            Pulling image "docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
Warning  Failed     5m34s (x3 over 6m55s)   kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     5m34s (x4 over 6m55s)   kubelet            Error: ErrImagePull
Warning  Failed     5m8s (x6 over 6m54s)    kubelet            Error: ImagePullBackOff
Normal   BackOff    2m17s (x18 over 6m54s)  kubelet            Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
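
The events above point to Docker Hub's anonymous pull rate limit ("toomanyrequests") as the root cause, rather than anything specific to the Volcano addon. A hedged sketch of common mitigations for a similar environment; DOCKERHUB_USER and DOCKERHUB_TOKEN are hypothetical placeholders, not credentials used in this run:

    # Authenticate the Docker daemon so pulls count against the higher per-account limit
    docker login -u "$DOCKERHUB_USER" --password-stdin <<< "$DOCKERHUB_TOKEN"

    # Or pre-load the image so the kubelet never has to pull it from Docker Hub
    out/minikube-linux-amd64 -p minikube image load docker.io/volcanosh/vc-scheduler:v1.10.0
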
addons_test.go:829: (dbg) Run:  kubectl --context minikube logs volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system
addons_test.go:829: (dbg) Non-zero exit: kubectl --context minikube logs volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system: exit status 1 (79.80895ms)

** stderr ** 
	Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-6c9778cbdf-q7mcw" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:829: kubectl --context minikube logs volcano-scheduler-6c9778cbdf-q7mcw -n volcano-system: exit status 1
addons_test.go:830: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
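
For context, the readiness wait that timed out above is roughly equivalent to the following standalone command (a hypothetical manual reproduction, not how the harness implements the check):

    kubectl --context minikube -n volcano-system wait pod \
      -l app=volcano-scheduler --for=condition=Ready --timeout=6m0s
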
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p minikube logs -n 25: (1.137495669s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
	| start   | --download-only -p             | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC |                     |
	|         | minikube --alsologtostderr     |          |         |         |                     |                     |
	|         | --binary-mirror                |          |         |         |                     |                     |
	|         | http://127.0.0.1:36049         |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
	| start   | -p minikube --alsologtostderr  | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:46 UTC |
	|         | -v=1 --memory=2048             |          |         |         |                     |                     |
	|         | --wait=true --driver=none      |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 05 Dec 24 18:46 UTC | 05 Dec 24 18:46 UTC |
	| addons  | enable dashboard -p minikube   | minikube | jenkins | v1.34.0 | 05 Dec 24 18:46 UTC |                     |
	| addons  | disable dashboard -p minikube  | minikube | jenkins | v1.34.0 | 05 Dec 24 18:46 UTC |                     |
	| start   | -p minikube --wait=true        | minikube | jenkins | v1.34.0 | 05 Dec 24 18:46 UTC | 05 Dec 24 18:48 UTC |
	|         | --memory=4000                  |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --addons=registry              |          |         |         |                     |                     |
	|         | --addons=metrics-server        |          |         |         |                     |                     |
	|         | --addons=volumesnapshots       |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |          |         |         |                     |                     |
	|         | --addons=gcp-auth              |          |         |         |                     |                     |
	|         | --addons=cloud-spanner         |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin  |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano |          |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
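
For readability, the final start entry in the audit table above, reconstructed from the wrapped rows into a single runnable command:

    out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 \
      --alsologtostderr --addons=registry --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
      --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
      --driver=none --bootstrapper=kubeadm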
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 18:46:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 18:46:30.769916  392706 out.go:345] Setting OutFile to fd 1 ...
	I1205 18:46:30.770042  392706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 18:46:30.770054  392706 out.go:358] Setting ErrFile to fd 2...
	I1205 18:46:30.770059  392706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 18:46:30.770279  392706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-381606/.minikube/bin
	I1205 18:46:30.771080  392706 out.go:352] Setting JSON to false
	I1205 18:46:30.772086  392706 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5340,"bootTime":1733419051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 18:46:30.772272  392706 start.go:139] virtualization: kvm guest
	I1205 18:46:30.774739  392706 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 18:46:30.776411  392706 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 18:46:30.776473  392706 notify.go:220] Checking for updates...
	W1205 18:46:30.776400  392706 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20052-381606/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 18:46:30.779362  392706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 18:46:30.780804  392706 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig
	I1205 18:46:30.782296  392706 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube
	I1205 18:46:30.783681  392706 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 18:46:30.784965  392706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 18:46:30.786416  392706 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 18:46:30.797207  392706 out.go:177] * Using the none driver based on user configuration
	I1205 18:46:30.798748  392706 start.go:297] selected driver: none
	I1205 18:46:30.798772  392706 start.go:901] validating driver "none" against <nil>
	I1205 18:46:30.798787  392706 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 18:46:30.798827  392706 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W1205 18:46:30.799140  392706 out.go:270] ! The 'none' driver does not respect the --memory flag
	I1205 18:46:30.799692  392706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 18:46:30.799965  392706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 18:46:30.799998  392706 cni.go:84] Creating CNI manager for ""
	I1205 18:46:30.800154  392706 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 18:46:30.800167  392706 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 18:46:30.800242  392706 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 18:46:30.801784  392706 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I1205 18:46:30.803412  392706 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/config.json ...
	I1205 18:46:30.803448  392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/config.json: {Name:mk77089bbcdd696d611f941aa97c12acab7ba119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:30.803597  392706 start.go:360] acquireMachinesLock for minikube: {Name:mk65d6052f343498845971aaee546d269ff2c3cc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 18:46:30.803637  392706 start.go:364] duration metric: took 22.313µs to acquireMachinesLock for "minikube"
	I1205 18:46:30.803658  392706 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 18:46:30.803729  392706 start.go:125] createHost starting for "" (driver="none")
	I1205 18:46:30.805465  392706 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I1205 18:46:30.806776  392706 exec_runner.go:51] Run: systemctl --version
	I1205 18:46:30.809409  392706 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I1205 18:46:30.809439  392706 client.go:168] LocalClient.Create starting
	I1205 18:46:30.809520  392706 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-381606/.minikube/certs/ca.pem
	I1205 18:46:30.809558  392706 main.go:141] libmachine: Decoding PEM data...
	I1205 18:46:30.809584  392706 main.go:141] libmachine: Parsing certificate...
	I1205 18:46:30.809640  392706 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-381606/.minikube/certs/cert.pem
	I1205 18:46:30.809665  392706 main.go:141] libmachine: Decoding PEM data...
	I1205 18:46:30.809691  392706 main.go:141] libmachine: Parsing certificate...
	I1205 18:46:30.810115  392706 client.go:171] duration metric: took 667.599µs to LocalClient.Create
	I1205 18:46:30.810143  392706 start.go:167] duration metric: took 736.433µs to libmachine.API.Create "minikube"
	I1205 18:46:30.810155  392706 start.go:293] postStartSetup for "minikube" (driver="none")
	I1205 18:46:30.810203  392706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 18:46:30.810256  392706 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 18:46:30.820595  392706 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 18:46:30.820618  392706 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 18:46:30.820626  392706 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 18:46:30.822589  392706 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I1205 18:46:30.823891  392706 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-381606/.minikube/addons for local assets ...
	I1205 18:46:30.823936  392706 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-381606/.minikube/files for local assets ...
	I1205 18:46:30.823957  392706 start.go:296] duration metric: took 13.796067ms for postStartSetup
	I1205 18:46:30.824582  392706 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/config.json ...
	I1205 18:46:30.824717  392706 start.go:128] duration metric: took 20.970392ms to createHost
	I1205 18:46:30.824732  392706 start.go:83] releasing machines lock for "minikube", held for 21.082813ms
	I1205 18:46:30.825166  392706 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 18:46:30.825248  392706 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W1205 18:46:30.827158  392706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 18:46:30.827217  392706 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 18:46:30.838011  392706 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 18:46:30.838043  392706 start.go:495] detecting cgroup driver to use...
	I1205 18:46:30.838088  392706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 18:46:30.838224  392706 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 18:46:30.859469  392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1205 18:46:30.870837  392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 18:46:30.881521  392706 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 18:46:30.881608  392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 18:46:30.891718  392706 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 18:46:30.904467  392706 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 18:46:30.915222  392706 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 18:46:30.925150  392706 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 18:46:30.934003  392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 18:46:30.943119  392706 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 18:46:30.953272  392706 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 18:46:30.964278  392706 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 18:46:30.973483  392706 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 18:46:30.981164  392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I1205 18:46:31.202016  392706 exec_runner.go:51] Run: sudo systemctl restart containerd
	I1205 18:46:31.270033  392706 start.go:495] detecting cgroup driver to use...
	I1205 18:46:31.270084  392706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 18:46:31.270202  392706 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 18:46:31.292954  392706 exec_runner.go:51] Run: which cri-dockerd
	I1205 18:46:31.294048  392706 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 18:46:31.303897  392706 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I1205 18:46:31.303933  392706 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I1205 18:46:31.303982  392706 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I1205 18:46:31.312945  392706 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1205 18:46:31.313105  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3548394414 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I1205 18:46:31.321553  392706 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I1205 18:46:31.539653  392706 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I1205 18:46:31.773626  392706 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 18:46:31.773803  392706 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I1205 18:46:31.773820  392706 exec_runner.go:203] rm: /etc/docker/daemon.json
	I1205 18:46:31.773861  392706 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I1205 18:46:31.782777  392706 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I1205 18:46:31.782930  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1139489101 /etc/docker/daemon.json
	I1205 18:46:31.792259  392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I1205 18:46:32.033938  392706 exec_runner.go:51] Run: sudo systemctl restart docker
	I1205 18:46:32.372403  392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 18:46:32.385156  392706 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I1205 18:46:32.404724  392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 18:46:32.419377  392706 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I1205 18:46:32.653485  392706 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I1205 18:46:32.890583  392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I1205 18:46:33.122075  392706 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I1205 18:46:33.137270  392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 18:46:33.150523  392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I1205 18:46:33.387376  392706 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I1205 18:46:33.463329  392706 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 18:46:33.463431  392706 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I1205 18:46:33.465011  392706 start.go:563] Will wait 60s for crictl version
	I1205 18:46:33.465055  392706 exec_runner.go:51] Run: which crictl
	I1205 18:46:33.466002  392706 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I1205 18:46:33.500298  392706 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1205 18:46:33.500406  392706 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I1205 18:46:33.523254  392706 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I1205 18:46:33.549111  392706 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1205 18:46:33.549216  392706 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I1205 18:46:33.552598  392706 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I1205 18:46:33.553992  392706 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 18:46:33.554162  392706 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 18:46:33.554173  392706 kubeadm.go:934] updating node { 10.128.15.240 8443 v1.31.2 docker true true} ...
	I1205 18:46:33.554279  392706 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-15 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.240 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I1205 18:46:33.554340  392706 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I1205 18:46:33.605060  392706 cni.go:84] Creating CNI manager for ""
	I1205 18:46:33.605089  392706 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 18:46:33.605106  392706 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 18:46:33.605131  392706 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.240 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-15 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 18:46:33.605274  392706 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.128.15.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-15"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "10.128.15.240"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.128.15.240"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
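A hedged aside: a generated config like the one above can be exercised without side effects before the real init (assuming it were saved to /var/tmp/minikube/kubeadm.yaml, the path this log uses later):

    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
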
	I1205 18:46:33.605344  392706 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 18:46:33.614930  392706 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 18:46:33.614987  392706 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 18:46:33.625021  392706 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1205 18:46:33.625038  392706 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1205 18:46:33.625077  392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I1205 18:46:33.625084  392706 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 18:46:33.625119  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 18:46:33.625132  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 18:46:33.637463  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1205 18:46:33.676053  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2709502434 /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 18:46:33.681993  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3191562976 /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 18:46:33.706012  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3852283944 /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 18:46:33.774226  392706 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 18:46:33.783978  392706 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I1205 18:46:33.784031  392706 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I1205 18:46:33.784072  392706 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I1205 18:46:33.794212  392706 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1205 18:46:33.794832  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3615464511 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I1205 18:46:33.806248  392706 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I1205 18:46:33.806272  392706 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I1205 18:46:33.806315  392706 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I1205 18:46:33.814811  392706 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 18:46:33.815017  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1960753144 /lib/systemd/system/kubelet.service
	I1205 18:46:33.824391  392706 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
	I1205 18:46:33.824567  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2327795690 /var/tmp/minikube/kubeadm.yaml.new
	I1205 18:46:33.834289  392706 exec_runner.go:51] Run: grep 10.128.15.240	control-plane.minikube.internal$ /etc/hosts
	I1205 18:46:33.835736  392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I1205 18:46:34.053075  392706 exec_runner.go:51] Run: sudo systemctl start kubelet
	I1205 18:46:34.068755  392706 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube for IP: 10.128.15.240
	I1205 18:46:34.068778  392706 certs.go:194] generating shared ca certs ...
	I1205 18:46:34.068803  392706 certs.go:226] acquiring lock for ca certs: {Name:mk9c2572d767bddb7155b721ed33333cb21d53bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:34.068988  392706 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-381606/.minikube/ca.key
	I1205 18:46:34.069041  392706 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-381606/.minikube/proxy-client-ca.key
	I1205 18:46:34.069052  392706 certs.go:256] generating profile certs ...
	I1205 18:46:34.069124  392706 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.key
	I1205 18:46:34.069143  392706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.crt with IP's: []
	I1205 18:46:34.341279  392706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.crt ...
	I1205 18:46:34.341316  392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.crt: {Name:mk08c5e544f65da5094f7bd202bf374884568ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:34.341476  392706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.key ...
	I1205 18:46:34.341489  392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/client.key: {Name:mkcb634696cd1738afddbc3bec63dcd527f9beaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:34.341554  392706 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key.271ff23d
	I1205 18:46:34.341568  392706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt.271ff23d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.240]
	I1205 18:46:34.424022  392706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt.271ff23d ...
	I1205 18:46:34.424058  392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt.271ff23d: {Name:mkbc27c587d344d6ba9d2761951e0622a5123980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:34.424201  392706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key.271ff23d ...
	I1205 18:46:34.424213  392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key.271ff23d: {Name:mkfadbb838d7f9bbe16a3192eff07dfb0b6fc080 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:34.424269  392706 certs.go:381] copying /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt.271ff23d -> /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt
	I1205 18:46:34.424365  392706 certs.go:385] copying /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key.271ff23d -> /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key
	I1205 18:46:34.424430  392706 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.key
	I1205 18:46:34.424445  392706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I1205 18:46:34.554472  392706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.crt ...
	I1205 18:46:34.554508  392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.crt: {Name:mk31608fded1fe7ee0c5fdee7eb3e4fb9debe10d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:34.554642  392706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.key ...
	I1205 18:46:34.554653  392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.key: {Name:mkfd8193014e2c724ee548fc6504a8972edc6a53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:34.554810  392706 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-381606/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 18:46:34.554844  392706 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-381606/.minikube/certs/ca.pem (1082 bytes)
	I1205 18:46:34.554868  392706 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-381606/.minikube/certs/cert.pem (1123 bytes)
	I1205 18:46:34.554890  392706 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-381606/.minikube/certs/key.pem (1675 bytes)
	I1205 18:46:34.555657  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 18:46:34.555795  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1931116226 /var/lib/minikube/certs/ca.crt
	I1205 18:46:34.566283  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 18:46:34.566423  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3311133801 /var/lib/minikube/certs/ca.key
	I1205 18:46:34.576192  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 18:46:34.576329  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube621261124 /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 18:46:34.584829  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 18:46:34.585049  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1662520478 /var/lib/minikube/certs/proxy-client-ca.key
	I1205 18:46:34.594239  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I1205 18:46:34.594365  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3655454639 /var/lib/minikube/certs/apiserver.crt
	I1205 18:46:34.602741  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 18:46:34.602905  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4229425421 /var/lib/minikube/certs/apiserver.key
	I1205 18:46:34.611536  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 18:46:34.611703  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube636842749 /var/lib/minikube/certs/proxy-client.crt
	I1205 18:46:34.620139  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 18:46:34.620303  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2921387457 /var/lib/minikube/certs/proxy-client.key
	I1205 18:46:34.630051  392706 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I1205 18:46:34.630084  392706 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I1205 18:46:34.630136  392706 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I1205 18:46:34.638522  392706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-381606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 18:46:34.638699  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2521431643 /usr/share/ca-certificates/minikubeCA.pem
	I1205 18:46:34.647731  392706 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 18:46:34.647873  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2100674310 /var/lib/minikube/kubeconfig
	I1205 18:46:34.658040  392706 exec_runner.go:51] Run: openssl version
	I1205 18:46:34.662113  392706 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 18:46:34.672036  392706 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 18:46:34.673748  392706 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Dec  5 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I1205 18:46:34.673808  392706 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 18:46:34.676832  392706 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 18:46:34.686061  392706 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 18:46:34.687375  392706 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 18:46:34.687420  392706 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 18:46:34.687565  392706 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 18:46:34.705382  392706 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 18:46:34.714798  392706 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 18:46:34.724606  392706 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I1205 18:46:34.747922  392706 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 18:46:34.757968  392706 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 18:46:34.757992  392706 kubeadm.go:157] found existing configuration files:
	
	I1205 18:46:34.758038  392706 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 18:46:34.766673  392706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 18:46:34.766744  392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 18:46:34.776134  392706 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 18:46:34.786200  392706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 18:46:34.786263  392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 18:46:34.794269  392706 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 18:46:34.803428  392706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 18:46:34.803504  392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 18:46:34.811344  392706 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 18:46:34.820187  392706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 18:46:34.820247  392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
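
The four grep-then-rm exchanges above are stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not point at the expected control-plane endpoint is deleted so that kubeadm init can regenerate it. Condensed into an equivalent sketch (endpoint and paths taken from the log lines above):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
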
	I1205 18:46:34.828194  392706 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 18:46:34.870510  392706 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 18:46:34.870546  392706 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 18:46:34.964502  392706 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 18:46:34.964656  392706 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 18:46:34.964684  392706 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 18:46:34.964694  392706 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 18:46:34.975934  392706 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 18:46:34.978913  392706 out.go:235]   - Generating certificates and keys ...
	I1205 18:46:34.978965  392706 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 18:46:34.978977  392706 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 18:46:35.253026  392706 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 18:46:35.562792  392706 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 18:46:35.631580  392706 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 18:46:35.716662  392706 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 18:46:35.898010  392706 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 18:46:35.898073  392706 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
	I1205 18:46:35.949614  392706 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 18:46:35.949666  392706 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
	I1205 18:46:36.038237  392706 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 18:46:36.164527  392706 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 18:46:36.290658  392706 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 18:46:36.290773  392706 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 18:46:36.508311  392706 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 18:46:36.849132  392706 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 18:46:37.123451  392706 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 18:46:37.332695  392706 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 18:46:37.541415  392706 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 18:46:37.541904  392706 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 18:46:37.544212  392706 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 18:46:37.546813  392706 out.go:235]   - Booting up control plane ...
	I1205 18:46:37.546853  392706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 18:46:37.546881  392706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 18:46:37.546889  392706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 18:46:37.563899  392706 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 18:46:37.570039  392706 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 18:46:37.570090  392706 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 18:46:37.808962  392706 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 18:46:37.808989  392706 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 18:46:38.310635  392706 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.662223ms
	I1205 18:46:38.310664  392706 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 18:46:42.812379  392706 kubeadm.go:310] [api-check] The API server is healthy after 4.50172s
	I1205 18:46:42.824082  392706 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 18:46:42.836756  392706 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 18:46:42.856422  392706 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 18:46:42.856470  392706 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-15 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 18:46:42.865099  392706 kubeadm.go:310] [bootstrap-token] Using token: iryjon.gzw4zhozj14dvsi7
	I1205 18:46:42.866537  392706 out.go:235]   - Configuring RBAC rules ...
	I1205 18:46:42.866575  392706 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 18:46:42.870316  392706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 18:46:42.876039  392706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 18:46:42.880393  392706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 18:46:42.883210  392706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 18:46:42.887019  392706 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 18:46:43.218073  392706 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 18:46:43.641108  392706 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 18:46:44.219410  392706 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 18:46:44.220430  392706 kubeadm.go:310] 
	I1205 18:46:44.220448  392706 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 18:46:44.220453  392706 kubeadm.go:310] 
	I1205 18:46:44.220457  392706 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 18:46:44.220461  392706 kubeadm.go:310] 
	I1205 18:46:44.220465  392706 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 18:46:44.220469  392706 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 18:46:44.220472  392706 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 18:46:44.220476  392706 kubeadm.go:310] 
	I1205 18:46:44.220480  392706 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 18:46:44.220483  392706 kubeadm.go:310] 
	I1205 18:46:44.220487  392706 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 18:46:44.220490  392706 kubeadm.go:310] 
	I1205 18:46:44.220493  392706 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 18:46:44.220496  392706 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 18:46:44.220500  392706 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 18:46:44.220503  392706 kubeadm.go:310] 
	I1205 18:46:44.220507  392706 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 18:46:44.220511  392706 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 18:46:44.220515  392706 kubeadm.go:310] 
	I1205 18:46:44.220518  392706 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iryjon.gzw4zhozj14dvsi7 \
	I1205 18:46:44.220523  392706 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dbe841b1e28f4a104101b2a84f1789a91b89b2acf49afcea7c16961b03ff18e5 \
	I1205 18:46:44.220527  392706 kubeadm.go:310] 	--control-plane 
	I1205 18:46:44.220531  392706 kubeadm.go:310] 
	I1205 18:46:44.220535  392706 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 18:46:44.220540  392706 kubeadm.go:310] 
	I1205 18:46:44.220543  392706 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iryjon.gzw4zhozj14dvsi7 \
	I1205 18:46:44.220547  392706 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dbe841b1e28f4a104101b2a84f1789a91b89b2acf49afcea7c16961b03ff18e5 
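
Should the join command's --discovery-token-ca-cert-hash ever need to be re-derived, the standard kubeadm recipe hashes the cluster CA's public key. In this run the CA sits under the certificateDir logged above (/var/lib/minikube/certs) rather than kubeadm's default /etc/kubernetes/pki, so the path here is an assumption based on that log line:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
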
	I1205 18:46:44.223810  392706 cni.go:84] Creating CNI manager for ""
	I1205 18:46:44.223839  392706 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 18:46:44.225789  392706 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 18:46:44.227153  392706 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I1205 18:46:44.239639  392706 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 18:46:44.239910  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1866476991 /etc/cni/net.d/1-k8s.conflist
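
With the none driver there is no driver-managed network, so minikube installs a plain bridge CNI config for the kubelet to pick up. The installed file can be inspected directly (expect a conflist using the bridge plugin with host-local IPAM):

    sudo cat /etc/cni/net.d/1-k8s.conflist
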
	I1205 18:46:44.252194  392706 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 18:46:44.252285  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:44.252332  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-15 minikube.k8s.io/updated_at=2024_12_05T18_46_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I1205 18:46:44.262941  392706 ops.go:34] apiserver oom_adj: -16
	I1205 18:46:44.336837  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:44.837016  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:45.336957  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:45.837673  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:46.337692  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:46.837740  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:47.337022  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:47.837502  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:48.337499  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:48.837044  392706 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 18:46:48.905831  392706 kubeadm.go:1113] duration metric: took 4.65361861s to wait for elevateKubeSystemPrivileges
	I1205 18:46:48.905876  392706 kubeadm.go:394] duration metric: took 14.218460262s to StartCluster
	I1205 18:46:48.905903  392706 settings.go:142] acquiring lock: {Name:mkdc0d6b86a842b5cd5a6cd70ea78a4ffd7cbb13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:48.906005  392706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-381606/kubeconfig
	I1205 18:46:48.906883  392706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-381606/kubeconfig: {Name:mk94906aabd0acbaafc4c687aa549eead9ea1dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 18:46:48.907140  392706 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 18:46:48.907193  392706 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:true volumesnapshots:true yakd:true]
	I1205 18:46:48.907342  392706 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I1205 18:46:48.907373  392706 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I1205 18:46:48.907380  392706 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I1205 18:46:48.907389  392706 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I1205 18:46:48.907395  392706 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I1205 18:46:48.907440  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.907448  392706 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I1205 18:46:48.907457  392706 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 18:46:48.907536  392706 addons.go:69] Setting metrics-server=true in profile "minikube"
	I1205 18:46:48.907559  392706 addons.go:234] Setting addon metrics-server=true in "minikube"
	I1205 18:46:48.907577  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.907601  392706 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I1205 18:46:48.907631  392706 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I1205 18:46:48.907663  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.907734  392706 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I1205 18:46:48.907813  392706 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I1205 18:46:48.907860  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.908210  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.908227  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.908229  392706 addons.go:69] Setting volcano=true in profile "minikube"
	I1205 18:46:48.908243  392706 addons.go:234] Setting addon volcano=true in "minikube"
	I1205 18:46:48.908264  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.908268  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.908393  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.908408  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.908434  392706 addons.go:69] Setting amd-gpu-device-plugin=true in profile "minikube"
	I1205 18:46:48.908449  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.908457  392706 addons.go:234] Setting addon amd-gpu-device-plugin=true in "minikube"
	I1205 18:46:48.908470  392706 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I1205 18:46:48.908483  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.908491  392706 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I1205 18:46:48.908547  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.908547  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.908545  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.908633  392706 out.go:177] * Configuring local host environment ...
	I1205 18:46:48.908643  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.908706  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.908725  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.908767  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.908928  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.908946  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.908976  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.909233  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.909250  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.909278  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.909368  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.909384  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.909417  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.907521  392706 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I1205 18:46:48.909695  392706 mustload.go:65] Loading cluster: minikube
	I1205 18:46:48.909999  392706 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 18:46:48.912367  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.912392  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.912428  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.907502  392706 host.go:66] Checking if "minikube" exists ...
	W1205 18:46:48.912765  392706 out.go:270] * 
	W1205 18:46:48.912804  392706 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W1205 18:46:48.912820  392706 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W1205 18:46:48.912831  392706 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W1205 18:46:48.912838  392706 out.go:270] * 
	W1205 18:46:48.912922  392706 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W1205 18:46:48.912959  392706 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W1205 18:46:48.912998  392706 out.go:270] * 
	W1205 18:46:48.913085  392706 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	I1205 18:46:48.913564  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.907359  392706 addons.go:69] Setting registry=true in profile "minikube"
	I1205 18:46:48.907345  392706 addons.go:69] Setting yakd=true in profile "minikube"
	W1205 18:46:48.913729  392706 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W1205 18:46:48.914154  392706 out.go:270] * 
	W1205 18:46:48.914172  392706 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I1205 18:46:48.914201  392706 start.go:235] Will wait 6m0s for node &{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 18:46:48.908213  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.914468  392706 addons.go:234] Setting addon registry=true in "minikube"
	I1205 18:46:48.914513  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.914468  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.907528  392706 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I1205 18:46:48.914612  392706 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I1205 18:46:48.914643  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.914654  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.914451  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.914441  392706 addons.go:234] Setting addon yakd=true in "minikube"
	I1205 18:46:48.914953  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:48.915069  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.915366  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.915412  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.915456  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.915473  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.915496  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.915460  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.915681  392706 out.go:177] * Verifying Kubernetes components...
	I1205 18:46:48.917216  392706 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I1205 18:46:48.941497  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:48.941536  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:48.941594  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:48.946352  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.946914  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.948558  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.948914  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.949750  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.951500  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.951704  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.952149  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.963660  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.965160  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.965908  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.966082  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.966774  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.966836  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.976553  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.976631  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.977010  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.977075  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.980250  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.980278  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.980328  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.980918  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.980989  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.981886  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.981945  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.985926  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.985997  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.987904  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.987980  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.989439  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.994571  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:48.995201  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:48.995232  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:48.998172  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:48.998262  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:48.999236  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:48.999283  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.005026  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.005060  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.005288  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.005462  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.005514  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.005544  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
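
The interleaved egrep/cat/healthz triples in this stretch are the same apiserver check run once per addon goroutine: resolve the kube-apiserver pid, confirm its freezer cgroup is THAWED (i.e. the pod is not paused), then probe /healthz over TLS. Done by hand it would look roughly like this; -k skips CA verification, and /healthz may require credentials depending on the cluster's anonymous-auth settings:

    pid=$(pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup
    curl -sk https://10.128.15.240:8443/healthz
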
	I1205 18:46:49.005565  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:49.007372  392706 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1205 18:46:49.008025  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.008051  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.009172  392706 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 18:46:49.009207  392706 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 18:46:49.009496  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1566787695 /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 18:46:49.010124  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:49.010142  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.010160  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.010213  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:49.012763  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.012966  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.013656  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.013682  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.014209  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.015871  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.015993  392706 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1205 18:46:49.016070  392706 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 18:46:49.016096  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.016778  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.016256  392706 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1205 18:46:49.017593  392706 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 18:46:49.018824  392706 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 18:46:49.018857  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 18:46:49.018993  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3553066936 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 18:46:49.019134  392706 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1205 18:46:49.019354  392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 18:46:49.019384  392706 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 18:46:49.019516  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1222195174 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 18:46:49.020245  392706 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 18:46:49.020351  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 18:46:49.020771  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3954816694 /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 18:46:49.023896  392706 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1205 18:46:49.024137  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.024836  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.024617  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.024662  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.026372  392706 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I1205 18:46:49.026459  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:49.027421  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:49.027448  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:49.027487  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:49.028033  392706 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1205 18:46:49.029726  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.031323  392706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 18:46:49.031602  392706 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1205 18:46:49.031638  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 18:46:49.031798  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1080143081 /etc/kubernetes/addons/deployment.yaml
	I1205 18:46:49.031987  392706 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1205 18:46:49.032032  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1205 18:46:49.033374  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3818939200 /etc/kubernetes/addons/volcano-deployment.yaml
	I1205 18:46:49.036041  392706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 18:46:49.038374  392706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 18:46:49.041266  392706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 18:46:49.043944  392706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 18:46:49.044080  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:49.045968  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:49.048487  392706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 18:46:49.050241  392706 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 18:46:49.051549  392706 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 18:46:49.052687  392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 18:46:49.052729  392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 18:46:49.053285  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube715282980 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 18:46:49.056522  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:49.056589  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:49.060391  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.060425  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.060883  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.061035  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.061319  392706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 18:46:49.061356  392706 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
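
The sed pipeline above patches the CoreDNS Corefile in flight: it inserts a hosts block resolving host.minikube.internal to 127.0.0.1 (the host itself, since the none driver runs directly on it), enables the log plugin, and replaces the ConfigMap. The result can be verified with:

    kubectl --context minikube -n kube-system get configmap coredns -o yaml
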
	I1205 18:46:49.061413  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 18:46:49.061642  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2498433628 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 18:46:49.065196  392706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 18:46:49.065232  392706 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 18:46:49.065398  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube287044299 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 18:46:49.066327  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.066762  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 18:46:49.067381  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.069414  392706 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 18:46:49.071215  392706 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 18:46:49.071242  392706 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I1205 18:46:49.071251  392706 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 18:46:49.071295  392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 18:46:49.072963  392706 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1205 18:46:49.076971  392706 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 18:46:49.080424  392706 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 18:46:49.080467  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 18:46:49.080620  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube771930884 /etc/kubernetes/addons/registry-rc.yaml
	I1205 18:46:49.083624  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:49.084536  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 18:46:49.085611  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.085636  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.087832  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1205 18:46:49.090366  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.090582  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.090598  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.094185  392706 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 18:46:49.096408  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.096702  392706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 18:46:49.096723  392706 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 18:46:49.096739  392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 18:46:49.096739  392706 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 18:46:49.096873  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1450247784 /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 18:46:49.096992  392706 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 18:46:49.097012  392706 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 18:46:49.097097  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 18:46:49.096873  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4203213854 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 18:46:49.097797  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube384872491 /etc/kubernetes/addons/yakd-ns.yaml
	I1205 18:46:49.098393  392706 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1205 18:46:49.108468  392706 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 18:46:49.108523  392706 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1205 18:46:49.109342  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3725792852 /etc/kubernetes/addons/ig-crd.yaml
	I1205 18:46:49.111006  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 18:46:49.111225  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1391726845 /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 18:46:49.112674  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:49.112755  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:49.128271  392706 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 18:46:49.128862  392706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 18:46:49.131501  392706 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 18:46:49.131531  392706 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 18:46:49.131718  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3682795093 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 18:46:49.132624  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3731156731 /etc/kubernetes/addons/registry-svc.yaml
	I1205 18:46:49.134720  392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 18:46:49.134754  392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 18:46:49.134897  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1030573102 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 18:46:49.135070  392706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 18:46:49.135099  392706 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 18:46:49.135463  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2114620705 /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 18:46:49.144222  392706 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 18:46:49.144265  392706 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 18:46:49.144436  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1521659921 /etc/kubernetes/addons/yakd-sa.yaml
	I1205 18:46:49.165534  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 18:46:49.169249  392706 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 18:46:49.169283  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 18:46:49.169413  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2599884371 /etc/kubernetes/addons/registry-proxy.yaml
	I1205 18:46:49.169752  392706 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 18:46:49.169772  392706 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 18:46:49.169876  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube103325063 /etc/kubernetes/addons/yakd-crb.yaml
	I1205 18:46:49.172012  392706 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 18:46:49.172043  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1205 18:46:49.172214  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3021882922 /etc/kubernetes/addons/ig-deployment.yaml
	I1205 18:46:49.181424  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 18:46:49.182529  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:49.182563  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:49.187575  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:46:49.187635  392706 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 18:46:49.187656  392706 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I1205 18:46:49.187664  392706 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I1205 18:46:49.187712  392706 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I1205 18:46:49.190736  392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 18:46:49.190774  392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 18:46:49.190941  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2780910546 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 18:46:49.192778  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 18:46:49.193268  392706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 18:46:49.193308  392706 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 18:46:49.193471  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2786149826 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 18:46:49.195126  392706 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 18:46:49.195159  392706 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 18:46:49.195294  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1224436272 /etc/kubernetes/addons/yakd-svc.yaml
	I1205 18:46:49.208251  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 18:46:49.224055  392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 18:46:49.224110  392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 18:46:49.224301  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2754282917 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 18:46:49.242523  392706 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 18:46:49.242702  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1320033730 /etc/kubernetes/addons/storageclass.yaml
	I1205 18:46:49.258586  392706 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 18:46:49.258633  392706 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 18:46:49.259110  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube429407668 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 18:46:49.278307  392706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 18:46:49.278351  392706 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 18:46:49.278504  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3309282558 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 18:46:49.301433  392706 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 18:46:49.301479  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 18:46:49.301635  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1214070606 /etc/kubernetes/addons/yakd-dp.yaml
	I1205 18:46:49.307202  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 18:46:49.322344  392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 18:46:49.322385  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 18:46:49.322592  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1717115819 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 18:46:49.352979  392706 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 18:46:49.353020  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 18:46:49.353171  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1446112244 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 18:46:49.378340  392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 18:46:49.378391  392706 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 18:46:49.378555  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3957832376 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 18:46:49.384608  392706 exec_runner.go:51] Run: sudo systemctl start kubelet
	I1205 18:46:49.409193  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 18:46:49.409375  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 18:46:49.519758  392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 18:46:49.519895  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 18:46:49.520566  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1367396723 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 18:46:49.543940  392706 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-15" to be "Ready" ...
	I1205 18:46:49.547610  392706 node_ready.go:49] node "ubuntu-20-agent-15" has status "Ready":"True"
	I1205 18:46:49.547634  392706 node_ready.go:38] duration metric: took 3.656512ms for node "ubuntu-20-agent-15" to be "Ready" ...
	I1205 18:46:49.547649  392706 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 18:46:49.559036  392706 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-jjc5z" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:49.566396  392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 18:46:49.566444  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 18:46:49.566621  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2010582015 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 18:46:49.575763  392706 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
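	The injected record makes host.minikube.internal resolvable from inside the cluster; on the "none" driver it resolves to the loopback address, as the log line above shows. If the record needs to be verified, it lives in the coredns ConfigMap (a minimal check, assuming kubectl access to the cluster):
	
		kubectl -n kube-system get configmap coredns -o yaml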
	I1205 18:46:49.638685  392706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 18:46:49.638725  392706 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 18:46:49.638898  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3300979419 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 18:46:49.796061  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 18:46:50.080153  392706 addons.go:475] Verifying addon registry=true in "minikube"
	I1205 18:46:50.081204  392706 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
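	The rescale above is minikube trimming CoreDNS to a single replica on this one-node cluster; the equivalent manual operation would be (a sketch, not necessarily what the code literally runs):
	
		kubectl -n kube-system scale deployment coredns --replicas=1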
	I1205 18:46:50.086096  392706 out.go:177] * Verifying registry addon...
	I1205 18:46:50.103769  392706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 18:46:50.108866  392706 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 18:46:50.109050  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:50.349124  392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.16763377s)
	I1205 18:46:50.349171  392706 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I1205 18:46:50.430569  392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.264982863s)
	I1205 18:46:50.488697  392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.280391185s)
	I1205 18:46:50.607892  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:50.729988  392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.32055571s)
	I1205 18:46:50.732160  392706 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I1205 18:46:51.134079  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:51.261689  392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.852152198s)
	W1205 18:46:51.261990  392706 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 18:46:51.262937  392706 retry.go:31] will retry after 308.794095ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
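	The failed apply above is a CRD ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass that depends on them are sent in one kubectl batch, so the class can be rejected before its CRD is registered, and minikube simply retries. Outside the retry loop, one way to avoid the race is to wait for the CRD to reach the Established condition before applying dependent objects (a minimal sketch; the CRD name is taken from the manifests above and kubectl access is assumed):
	
		kubectl wait --for=condition=Established \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml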
	I1205 18:46:51.566243  392706 pod_ready.go:103] pod "amd-gpu-device-plugin-jjc5z" in "kube-system" namespace has status "Ready":"False"
	I1205 18:46:51.572512  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 18:46:51.609726  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:52.122818  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:52.454449  392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.658312859s)
	I1205 18:46:52.454497  392706 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I1205 18:46:52.456369  392706 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 18:46:52.459655  392706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 18:46:52.473546  392706 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 18:46:52.473580  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:52.483248  392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.395369671s)
	I1205 18:46:52.609342  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:52.972944  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:53.109078  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:53.464673  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:53.565278  392706 pod_ready.go:93] pod "amd-gpu-device-plugin-jjc5z" in "kube-system" namespace has status "Ready":"True"
	I1205 18:46:53.565300  392706 pod_ready.go:82] duration metric: took 4.006135965s for pod "amd-gpu-device-plugin-jjc5z" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:53.565311  392706 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:53.607858  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:53.963829  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:54.108586  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:54.431194  392706 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.858625696s)
	I1205 18:46:54.465140  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:54.608155  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:54.966090  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:55.109011  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:55.465700  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:55.571183  392706 pod_ready.go:103] pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace has status "Ready":"False"
	I1205 18:46:55.609061  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:55.965662  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:56.015583  392706 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 18:46:56.015749  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2113576741 /var/lib/minikube/google_application_credentials.json
	I1205 18:46:56.028499  392706 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 18:46:56.028661  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2241339044 /var/lib/minikube/google_cloud_project
	I1205 18:46:56.041206  392706 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I1205 18:46:56.041297  392706 host.go:66] Checking if "minikube" exists ...
	I1205 18:46:56.042224  392706 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I1205 18:46:56.042255  392706 api_server.go:166] Checking apiserver status ...
	I1205 18:46:56.042296  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:46:56.065444  392706 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/394012/cgroup
	I1205 18:46:56.079882  392706 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f"
	I1205 18:46:56.079965  392706 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb519627eb85f5ecd7d5a34384dec33a/6201519c962ced2cd45d683d32451718ad215f30dd6f50a6773f94a40323c52f/freezer.state
	I1205 18:46:56.092754  392706 api_server.go:204] freezer state: "THAWED"
	I1205 18:46:56.092795  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:46:56.098421  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
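	The same /healthz probe can be reproduced from the host shell; on a default kubeadm-style apiserver the endpoint is readable without credentials, so a bare curl suffices (-k skips verification of the cluster's self-signed certificate):
	
		curl -k https://10.128.15.240:8443/healthz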
	I1205 18:46:56.098508  392706 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 18:46:56.102542  392706 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 18:46:56.104274  392706 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 18:46:56.105737  392706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 18:46:56.105785  392706 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 18:46:56.105949  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2805309064 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 18:46:56.109347  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:56.120808  392706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 18:46:56.120858  392706 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 18:46:56.121022  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2640671215 /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 18:46:56.133706  392706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 18:46:56.133739  392706 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 18:46:56.133858  392706 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3351501913 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 18:46:56.146241  392706 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 18:46:56.465367  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:56.570596  392706 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I1205 18:46:56.572698  392706 out.go:177] * Verifying gcp-auth addon...
	I1205 18:46:56.574946  392706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 18:46:56.578226  392706 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 18:46:56.680479  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:56.965308  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:57.107918  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:57.466014  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:57.702153  392706 pod_ready.go:103] pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace has status "Ready":"False"
	I1205 18:46:57.702829  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:57.972855  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:58.107912  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:58.464740  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:58.568513  392706 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-fwgcs" not found
	I1205 18:46:58.568550  392706 pod_ready.go:82] duration metric: took 5.003232034s for pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace to be "Ready" ...
	E1205 18:46:58.568567  392706 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-fwgcs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-fwgcs" not found
	I1205 18:46:58.568576  392706 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zk8jj" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.574604  392706 pod_ready.go:93] pod "coredns-7c65d6cfc9-zk8jj" in "kube-system" namespace has status "Ready":"True"
	I1205 18:46:58.574633  392706 pod_ready.go:82] duration metric: took 6.048206ms for pod "coredns-7c65d6cfc9-zk8jj" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.574645  392706 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.582640  392706 pod_ready.go:93] pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I1205 18:46:58.582668  392706 pod_ready.go:82] duration metric: took 8.015057ms for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.582682  392706 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.587364  392706 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I1205 18:46:58.587391  392706 pod_ready.go:82] duration metric: took 4.700049ms for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.587404  392706 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.592047  392706 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I1205 18:46:58.592069  392706 pod_ready.go:82] duration metric: took 4.65891ms for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.592079  392706 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-469rp" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.607580  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:58.769820  392706 pod_ready.go:93] pod "kube-proxy-469rp" in "kube-system" namespace has status "Ready":"True"
	I1205 18:46:58.769856  392706 pod_ready.go:82] duration metric: took 177.768557ms for pod "kube-proxy-469rp" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.769873  392706 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:58.965222  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:59.108651  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:59.169189  392706 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I1205 18:46:59.169271  392706 pod_ready.go:82] duration metric: took 399.388808ms for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:59.169292  392706 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ztwcn" in "kube-system" namespace to be "Ready" ...
	I1205 18:46:59.465083  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:46:59.680775  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:46:59.965320  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:00.108296  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:00.464288  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:00.607729  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:00.965067  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:01.199066  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:01.203216  392706 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ztwcn" in "kube-system" namespace has status "Ready":"False"
	I1205 18:47:01.464207  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:01.608182  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:01.965448  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:02.107681  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:02.465293  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:02.608187  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:02.675281  392706 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-ztwcn" in "kube-system" namespace has status "Ready":"True"
	I1205 18:47:02.675312  392706 pod_ready.go:82] duration metric: took 3.506010384s for pod "nvidia-device-plugin-daemonset-ztwcn" in "kube-system" namespace to be "Ready" ...
	I1205 18:47:02.675325  392706 pod_ready.go:39] duration metric: took 13.127659766s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 18:47:02.675351  392706 api_server.go:52] waiting for apiserver process to appear ...
	I1205 18:47:02.675427  392706 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 18:47:02.695187  392706 api_server.go:72] duration metric: took 13.78094142s to wait for apiserver process to appear ...
	I1205 18:47:02.695218  392706 api_server.go:88] waiting for apiserver healthz status ...
	I1205 18:47:02.695249  392706 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I1205 18:47:02.699816  392706 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I1205 18:47:02.700810  392706 api_server.go:141] control plane version: v1.31.2
	I1205 18:47:02.700838  392706 api_server.go:131] duration metric: took 5.610942ms to wait for apiserver health ...
	I1205 18:47:02.700849  392706 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 18:47:02.708782  392706 system_pods.go:59] 17 kube-system pods found
	I1205 18:47:02.708834  392706 system_pods.go:61] "amd-gpu-device-plugin-jjc5z" [f828cfe8-480f-42a5-8e47-eb2a2e5f4a1e] Running
	I1205 18:47:02.708845  392706 system_pods.go:61] "coredns-7c65d6cfc9-zk8jj" [7adf42d9-14af-4e94-adae-b04af746e283] Running
	I1205 18:47:02.708856  392706 system_pods.go:61] "csi-hostpath-attacher-0" [7d308609-1109-41ee-919c-93fefc7b9d56] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 18:47:02.708874  392706 system_pods.go:61] "csi-hostpath-resizer-0" [68b7b7f0-6085-4d0a-a17c-4a86015fa4ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 18:47:02.708890  392706 system_pods.go:61] "csi-hostpathplugin-6l6p5" [72ecf43c-9c33-4354-bb62-a25130b9ed65] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 18:47:02.708928  392706 system_pods.go:61] "etcd-ubuntu-20-agent-15" [662d8ffa-fc3a-41fc-a149-15e7136dc6ad] Running
	I1205 18:47:02.708936  392706 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-15" [6fdda757-9d19-4e58-a9de-3eb01f3c222d] Running
	I1205 18:47:02.708944  392706 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-15" [77968090-16a4-4af1-a253-1b9c1c84b83f] Running
	I1205 18:47:02.708949  392706 system_pods.go:61] "kube-proxy-469rp" [0f95cbc3-0d36-4d85-b1a3-3271dbb30d28] Running
	I1205 18:47:02.708954  392706 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-15" [e7f18375-954d-47e5-badf-a043eb4a045b] Running
	I1205 18:47:02.708962  392706 system_pods.go:61] "metrics-server-84c5f94fbc-4rstm" [dfef15df-0ac2-42d6-ae56-67fdb95b6a8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 18:47:02.708968  392706 system_pods.go:61] "nvidia-device-plugin-daemonset-ztwcn" [95079423-3a8c-43d2-af27-55852564e9ae] Running
	I1205 18:47:02.708977  392706 system_pods.go:61] "registry-66c9cd494c-jgf47" [9f55f79d-b172-464c-9881-382ccbd93912] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 18:47:02.708984  392706 system_pods.go:61] "registry-proxy-wl4vl" [5cf2fdd8-e0ad-481c-b4ee-4307a7236b36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 18:47:02.708995  392706 system_pods.go:61] "snapshot-controller-56fcc65765-ksj7l" [88702745-5bf8-4e07-a722-327cdbc69b9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 18:47:02.709005  392706 system_pods.go:61] "snapshot-controller-56fcc65765-v98wh" [fe983053-ea62-4dc3-9c0f-ecd39b63919e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 18:47:02.709011  392706 system_pods.go:61] "storage-provisioner" [b74c2937-c6b0-4e32-b3f8-b9b13659a848] Running
	I1205 18:47:02.709021  392706 system_pods.go:74] duration metric: took 8.163275ms to wait for pod list to return data ...
	I1205 18:47:02.709031  392706 default_sa.go:34] waiting for default service account to be created ...
	I1205 18:47:02.769708  392706 default_sa.go:45] found service account: "default"
	I1205 18:47:02.769738  392706 default_sa.go:55] duration metric: took 60.698722ms for default service account to be created ...
	I1205 18:47:02.769752  392706 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 18:47:02.964997  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:02.975075  392706 system_pods.go:86] 17 kube-system pods found
	I1205 18:47:02.975111  392706 system_pods.go:89] "amd-gpu-device-plugin-jjc5z" [f828cfe8-480f-42a5-8e47-eb2a2e5f4a1e] Running
	I1205 18:47:02.975121  392706 system_pods.go:89] "coredns-7c65d6cfc9-zk8jj" [7adf42d9-14af-4e94-adae-b04af746e283] Running
	I1205 18:47:02.975132  392706 system_pods.go:89] "csi-hostpath-attacher-0" [7d308609-1109-41ee-919c-93fefc7b9d56] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 18:47:02.975142  392706 system_pods.go:89] "csi-hostpath-resizer-0" [68b7b7f0-6085-4d0a-a17c-4a86015fa4ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 18:47:02.975153  392706 system_pods.go:89] "csi-hostpathplugin-6l6p5" [72ecf43c-9c33-4354-bb62-a25130b9ed65] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 18:47:02.975162  392706 system_pods.go:89] "etcd-ubuntu-20-agent-15" [662d8ffa-fc3a-41fc-a149-15e7136dc6ad] Running
	I1205 18:47:02.975169  392706 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-15" [6fdda757-9d19-4e58-a9de-3eb01f3c222d] Running
	I1205 18:47:02.975179  392706 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-15" [77968090-16a4-4af1-a253-1b9c1c84b83f] Running
	I1205 18:47:02.975185  392706 system_pods.go:89] "kube-proxy-469rp" [0f95cbc3-0d36-4d85-b1a3-3271dbb30d28] Running
	I1205 18:47:02.975194  392706 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-15" [e7f18375-954d-47e5-badf-a043eb4a045b] Running
	I1205 18:47:02.975205  392706 system_pods.go:89] "metrics-server-84c5f94fbc-4rstm" [dfef15df-0ac2-42d6-ae56-67fdb95b6a8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 18:47:02.975217  392706 system_pods.go:89] "nvidia-device-plugin-daemonset-ztwcn" [95079423-3a8c-43d2-af27-55852564e9ae] Running
	I1205 18:47:02.975229  392706 system_pods.go:89] "registry-66c9cd494c-jgf47" [9f55f79d-b172-464c-9881-382ccbd93912] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 18:47:02.975239  392706 system_pods.go:89] "registry-proxy-wl4vl" [5cf2fdd8-e0ad-481c-b4ee-4307a7236b36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 18:47:02.975248  392706 system_pods.go:89] "snapshot-controller-56fcc65765-ksj7l" [88702745-5bf8-4e07-a722-327cdbc69b9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 18:47:02.975260  392706 system_pods.go:89] "snapshot-controller-56fcc65765-v98wh" [fe983053-ea62-4dc3-9c0f-ecd39b63919e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 18:47:02.975270  392706 system_pods.go:89] "storage-provisioner" [b74c2937-c6b0-4e32-b3f8-b9b13659a848] Running
	I1205 18:47:02.975282  392706 system_pods.go:126] duration metric: took 205.521624ms to wait for k8s-apps to be running ...
	I1205 18:47:02.975295  392706 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 18:47:02.975356  392706 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I1205 18:47:02.990603  392706 system_svc.go:56] duration metric: took 15.292495ms WaitForService to wait for kubelet
	I1205 18:47:02.990640  392706 kubeadm.go:582] duration metric: took 14.07640579s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 18:47:02.990667  392706 node_conditions.go:102] verifying NodePressure condition ...
	I1205 18:47:03.108115  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:03.169970  392706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 18:47:03.170002  392706 node_conditions.go:123] node cpu capacity is 8
	I1205 18:47:03.170020  392706 node_conditions.go:105] duration metric: took 179.34669ms to run NodePressure ...
	I1205 18:47:03.170043  392706 start.go:241] waiting for startup goroutines ...
	I1205 18:47:03.463884  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:03.686072  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:03.964928  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:04.107991  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:04.464275  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:04.607849  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:04.964792  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:05.108046  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:05.465596  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:05.608363  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:05.965279  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:06.107814  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:06.464838  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:06.610026  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:06.968972  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:07.108223  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:07.464758  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:07.680442  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:07.964953  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:08.113963  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:08.465378  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:08.608570  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:08.964880  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:09.108617  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:09.464583  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:09.608998  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:09.966051  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:10.107312  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 18:47:10.465094  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:10.608143  392706 kapi.go:107] duration metric: took 20.504365742s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 18:47:10.964472  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:11.465172  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:11.965113  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:12.464521  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:12.986720  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:13.465033  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:13.973776  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:14.464463  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:14.965448  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:15.464973  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:15.987260  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:16.465299  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:16.964389  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:17.465827  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:17.963594  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:18.465151  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:18.964635  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:19.465520  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:19.965957  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:20.464529  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:20.965098  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:21.464325  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:21.964858  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:22.464236  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:23.024941  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:23.464410  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:23.966106  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:24.465553  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:24.978106  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:25.465212  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:25.965534  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 18:47:26.464843  392706 kapi.go:107] duration metric: took 34.00519142s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 18:47:38.079615  392706 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 18:47:38.079645  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:38.579029  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:39.078461  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:39.579164  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:40.078509  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:40.579142  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:41.078837  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:41.579629  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:42.079368  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:42.578774  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:43.079590  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:43.578935  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:44.078438  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:44.579111  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:45.078429  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:45.579466  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:46.079197  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:46.578586  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:47.078792  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:47.578137  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:48.078503  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:48.578681  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:49.078311  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:49.578854  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:50.078476  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:50.577860  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:51.078428  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:51.578461  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:52.079127  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:52.578218  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:53.078698  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:53.579828  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:54.079310  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:54.578693  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:55.078902  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:55.578241  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:56.078879  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:56.579057  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:57.078280  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:47:57.578818  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 27 near-identical kapi.go:96 polling lines (18:47:58.078 – 18:48:11.121, one every ~500ms, pod still Pending) elided ...]
	I1205 18:48:11.578338  392706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 18:48:12.079264  392706 kapi.go:107] duration metric: took 1m15.504316026s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 18:48:12.081150  392706 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I1205 18:48:12.082748  392706 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 18:48:12.084069  392706 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 18:48:12.085580  392706 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, cloud-spanner, metrics-server, storage-provisioner, inspektor-gadget, yakd, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I1205 18:48:12.087098  392706 addons.go:510] duration metric: took 1m23.179912011s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass cloud-spanner metrics-server storage-provisioner inspektor-gadget yakd volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I1205 18:48:12.087159  392706 start.go:246] waiting for cluster config update ...
	I1205 18:48:12.087186  392706 start.go:255] writing updated cluster config ...
	I1205 18:48:12.087461  392706 exec_runner.go:51] Run: rm -f paused
	I1205 18:48:12.134843  392706 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 18:48:12.136952  392706 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
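
Note on the three gcp-auth messages above: a pod opts out of credential mounting via the gcp-auth-skip-secret label, and pods that existed before the addon came up only get credentials after a refresh. A minimal sketch of both knobs (the pod name and image are illustrative, not from this run):

    # hypothetical opt-out pod; per the message above, the webhook skips
    # mounting when the gcp-auth-skip-secret label key is present
    kubectl run no-creds-demo --image=busybox:1.36 \
      --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600

    # re-mount credentials into pods created before the addon was enabled
    minikube addons enable gcp-auth --refresh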
	
	
	==> Docker <==
	-- Logs begin at Wed 2024-10-16 18:17:53 UTC, end at Thu 2024-12-05 18:54:13 UTC. --
	Dec 05 18:47:31 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:47:31Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Pulling from volcanosh/vc-scheduler"
	Dec 05 18:47:53 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:47:53.898387701Z" level=warning msg="reference for unknown type: " digest="sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" remote="docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" spanID=ee4d6794cceaad1c traceID=a9c325cca610479b2a1d37c8ac3f9081
	Dec 05 18:47:54 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:47:54.098694552Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=ee4d6794cceaad1c traceID=a9c325cca610479b2a1d37c8ac3f9081
	Dec 05 18:47:54 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:47:54.100453689Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=ee4d6794cceaad1c traceID=a9c325cca610479b2a1d37c8ac3f9081
	Dec 05 18:48:00 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67f19b0eaf6c2313b8891949cc88c86bd823a48b76f8ee6e58b250fdd30337d6/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 05 18:48:00 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/082309ae8953784f62149319fa1a2c3c6ecdf57ca123adc5d9481774d8f83ef1/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 05 18:48:00 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:00.224318012Z" level=warning msg="reference for unknown type: " digest="sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f" remote="registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f" spanID=546ae4ccd364f69d traceID=049b34ed9a1a4f532ef43f3545b9166e
	Dec 05 18:48:01 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:01Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f"
	Dec 05 18:48:01 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:01Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f: Status: Image is up to date for registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f"
	Dec 05 18:48:01 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:01.349242433Z" level=info msg="ignoring event" container=676b89476343e14831156b16216ea7c8ac2802cea18a71bdfe50fa6ac92ab5f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 05 18:48:01 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:01.384561184Z" level=info msg="ignoring event" container=de60c0d4958cd572f7b2dc193ee52a4463a8290563a3a520b8a7a20ab323c685 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 05 18:48:02 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:02.602121861Z" level=info msg="ignoring event" container=082309ae8953784f62149319fa1a2c3c6ecdf57ca123adc5d9481774d8f83ef1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 05 18:48:02 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:02.618810830Z" level=info msg="ignoring event" container=67f19b0eaf6c2313b8891949cc88c86bd823a48b76f8ee6e58b250fdd30337d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 05 18:48:09 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4fc041be94946e82a9b10a3aea51c30f3c669f98ae9e731258e1563644663770/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 05 18:48:09 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:09.990452188Z" level=warning msg="reference for unknown type: " digest="sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" spanID=188720d6769e3297 traceID=039b4b4f5544fd290559326ba2b5ff7e
	Dec 05 18:48:10 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:48:10Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Dec 05 18:48:37 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:37.900647292Z" level=warning msg="reference for unknown type: " digest="sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" remote="docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" spanID=d0170cc20eec9a0b traceID=5d091e2e4955624a35c585db4040c34a
	Dec 05 18:48:38 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:38.083244338Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=d0170cc20eec9a0b traceID=5d091e2e4955624a35c585db4040c34a
	Dec 05 18:48:38 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:48:38.084827787Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=d0170cc20eec9a0b traceID=5d091e2e4955624a35c585db4040c34a
	Dec 05 18:50:05 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:50:05.889632268Z" level=warning msg="reference for unknown type: " digest="sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" remote="docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" spanID=8009669076fa0948 traceID=6fdd82d9d2011f598526fa2414b7d736
	Dec 05 18:50:06 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:50:06.261161558Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=8009669076fa0948 traceID=6fdd82d9d2011f598526fa2414b7d736
	Dec 05 18:50:06 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:50:06Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Pulling from volcanosh/vc-scheduler"
	Dec 05 18:52:50 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:52:50.905746289Z" level=warning msg="reference for unknown type: " digest="sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" remote="docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882" spanID=f45986b4ea198c64 traceID=0e66c903af9022486cfd0106a9d632c4
	Dec 05 18:52:51 ubuntu-20-agent-15 dockerd[392922]: time="2024-12-05T18:52:51.239367759Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" spanID=f45986b4ea198c64 traceID=0e66c903af9022486cfd0106a9d632c4
	Dec 05 18:52:51 ubuntu-20-agent-15 cri-dockerd[393251]: time="2024-12-05T18:52:51Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Pulling from volcanosh/vc-scheduler"
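
The toomanyrequests errors above are Docker Hub's anonymous pull limit, and they are what kept docker.io/volcanosh/vc-scheduler:v1.10.0 from ever arriving (hence the scheduler pod's ImagePullBackOff and the 6m0s test timeout). A sketch of checking the remaining quota against Docker Hub's rate-limit preview endpoint (jq assumed available):

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

    # authenticating raises the limit; with the 'none' driver the host
    # daemon is the cluster runtime, so a plain pull unblocks the pod
    docker login
    docker pull docker.io/volcanosh/vc-scheduler:v1.10.0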
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	99954234e5be8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7                                 6 minutes ago       Running             gcp-auth                                 0                   4fc041be94946       gcp-auth-c684cb797-s7lbj
	502412b20138e       volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e                                         6 minutes ago       Running             admission                                0                   47f6ecd2fd2be       volcano-admission-5874dfdd79-2cwr4
	f2f93cf204722       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   d007039d62135       csi-hostpathplugin-6l6p5
	cbd4521542b0f       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          6 minutes ago       Running             csi-provisioner                          0                   d007039d62135       csi-hostpathplugin-6l6p5
	00d4d23edd666       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            6 minutes ago       Running             liveness-probe                           0                   d007039d62135       csi-hostpathplugin-6l6p5
	c3375237da024       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           6 minutes ago       Running             hostpath                                 0                   d007039d62135       csi-hostpathplugin-6l6p5
	10377173e4c11       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                6 minutes ago       Running             node-driver-registrar                    0                   d007039d62135       csi-hostpathplugin-6l6p5
	ab7ccb31799a0       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             6 minutes ago       Running             csi-attacher                             0                   ad8fe40345042       csi-hostpath-attacher-0
	ebced389a49c8       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              6 minutes ago       Running             csi-resizer                              0                   de2784bf24913       csi-hostpath-resizer-0
	4c8cb924caa41       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   6 minutes ago       Running             csi-external-health-monitor-controller   0                   d007039d62135       csi-hostpathplugin-6l6p5
	ddb1f42ada997       volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de                                      6 minutes ago       Running             volcano-controllers                      0                   23f6075945481       volcano-controllers-789ffc5785-6tdfl
	a9de8587faae4       volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e                                         6 minutes ago       Exited              main                                     0                   035e9234dea47       volcano-admission-init-qp6tk
	579f9bc167e81       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      7 minutes ago       Running             volume-snapshot-controller               0                   cad6469ff0947       snapshot-controller-56fcc65765-v98wh
	30c8352c57c4a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      7 minutes ago       Running             volume-snapshot-controller               0                   b58889c3536e7       snapshot-controller-56fcc65765-ksj7l
	b2a3a5cff703d       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        7 minutes ago       Running             yakd                                     0                   3918625261f2b       yakd-dashboard-67d98fc6b-2nsqg
	f903738f9cd99       gcr.io/k8s-minikube/kube-registry-proxy@sha256:60ab3508367ad093b4b891231572577371a29f838d61e64d7f7d093d961c862c                              7 minutes ago       Running             registry-proxy                           0                   ee6edbcbea4f4       registry-proxy-wl4vl
	dcb5aa8fb0b33       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:102216c464091f4d9e07d825eba0b681f0d7e0ce108957028443441d3843d1fa                            7 minutes ago       Running             gadget                                   0                   ae87f8c81e4b8       gadget-c4wk4
	6e15d539ba115       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        7 minutes ago       Running             metrics-server                           0                   0116f3d10d3da       metrics-server-84c5f94fbc-4rstm
	6c02b5456a0e4       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             7 minutes ago       Running             registry                                 0                   8a35bf838b64b       registry-66c9cd494c-jgf47
	db086619a6500       gcr.io/cloud-spanner-emulator/emulator@sha256:8fae494dce81f5167703b16f943dda76109195b8fc06bad1f3e952fe90a0b8d0                               7 minutes ago       Running             cloud-spanner-emulator                   0                   96b55fe76be85       cloud-spanner-emulator-dc5db94f4-6mw9g
	2c2e0de240cac       nvcr.io/nvidia/k8s-device-plugin@sha256:7089559ce6153018806857f5049085bae15b3bf6f1c8bd19d8b12f707d087dea                                     7 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ae9e5c418a252       nvidia-device-plugin-daemonset-ztwcn
	9f3e9cc9dc1ec       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               7 minutes ago       Running             amd-gpu-device-plugin                    0                   1b6ba3cd604d5       amd-gpu-device-plugin-jjc5z
	22de0a82d160f       6e38f40d628db                                                                                                                                7 minutes ago       Running             storage-provisioner                      0                   f53537cc8a4fd       storage-provisioner
	39f84d6805255       c69fa2e9cbf5f                                                                                                                                7 minutes ago       Running             coredns                                  0                   29344feaa26d6       coredns-7c65d6cfc9-zk8jj
	91ca714db528f       505d571f5fd56                                                                                                                                7 minutes ago       Running             kube-proxy                               0                   b83bcdf0410aa       kube-proxy-469rp
	0b56e4852737c       0486b6c53a1b5                                                                                                                                7 minutes ago       Running             kube-controller-manager                  0                   c10b76dd139f6       kube-controller-manager-ubuntu-20-agent-15
	6201519c962ce       9499c9960544e                                                                                                                                7 minutes ago       Running             kube-apiserver                           0                   8a320a6b85cc0       kube-apiserver-ubuntu-20-agent-15
	26cdc8676d8c4       2e96e5913fc06                                                                                                                                7 minutes ago       Running             etcd                                     0                   4ca7309d93060       etcd-ubuntu-20-agent-15
	4903d046814cf       847c7bc1a5418                                                                                                                                7 minutes ago       Running             kube-scheduler                           0                   12d973ebe5bdc       kube-scheduler-ubuntu-20-agent-15
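
Everything in the table above is Running (the one Exited entry, volcano-admission-init, is a one-shot job that is expected to exit); the only missing container is volcano-scheduler, which never started because its image never landed. An illustrative check against the live cluster:

    kubectl -n volcano-system get pod -l app=volcano-scheduler \
      -o jsonpath='{.items[0].status.containerStatuses[0].state}'
    # expected here: something like {"waiting":{"reason":"ImagePullBackOff",...}}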
	
	
	==> coredns [39f84d680525] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55626 - 2281 "HINFO IN 8470499205607441759.8113813181506174110. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025924016s
	[INFO] 10.244.0.24:39724 - 27835 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000340434s
	[INFO] 10.244.0.24:40270 - 3006 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000426447s
	[INFO] 10.244.0.24:48655 - 2168 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014034s
	[INFO] 10.244.0.24:43600 - 28199 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000251882s
	[INFO] 10.244.0.24:58879 - 23980 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000154676s
	[INFO] 10.244.0.24:56351 - 38591 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000216233s
	[INFO] 10.244.0.24:50110 - 28108 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004434318s
	[INFO] 10.244.0.24:58329 - 40405 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004535187s
	[INFO] 10.244.0.24:51710 - 17754 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003445827s
	[INFO] 10.244.0.24:53740 - 24103 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004199713s
	[INFO] 10.244.0.24:49876 - 51575 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004762414s
	[INFO] 10.244.0.24:59094 - 11297 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004866412s
	[INFO] 10.244.0.24:42394 - 21340 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00252329s
	[INFO] 10.244.0.24:51465 - 3087 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002556127s
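
The NXDOMAIN ladder above is ordinary ndots:5 behavior, not a resolution failure: storage.googleapis.com has fewer than five dots, so each search domain from the pod's resolv.conf (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCP internal domains) is tried first, and only the final bare-name queries return NOERROR. A trailing dot makes the name fully qualified and skips the walk (throwaway pod, illustrative):

    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
      -- nslookup storage.googleapis.com.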
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-15
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-15
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T18_46_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-15
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-15"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 18:46:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-15
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 18:54:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 18:53:21 +0000   Thu, 05 Dec 2024 18:46:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 18:53:21 +0000   Thu, 05 Dec 2024 18:46:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 18:53:21 +0000   Thu, 05 Dec 2024 18:46:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 18:53:21 +0000   Thu, 05 Dec 2024 18:46:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.128.15.240
	  Hostname:    ubuntu-20-agent-15
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                b37db8a4-1476-dab1-7f0f-0d5cfb4ed197
	  Boot ID:                    39024a98-8447-46b2-bbc5-7915429b9c2d
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-dc5db94f4-6mw9g        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  gadget                      gadget-c4wk4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  gcp-auth                    gcp-auth-c684cb797-s7lbj                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 amd-gpu-device-plugin-jjc5z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 coredns-7c65d6cfc9-zk8jj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m24s
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 csi-hostpathplugin-6l6p5                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 etcd-ubuntu-20-agent-15                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m31s
	  kube-system                 kube-apiserver-ubuntu-20-agent-15             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-controller-manager-ubuntu-20-agent-15    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-proxy-469rp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-scheduler-ubuntu-20-agent-15             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 metrics-server-84c5f94fbc-4rstm               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m23s
	  kube-system                 nvidia-device-plugin-daemonset-ztwcn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 registry-66c9cd494c-jgf47                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 registry-proxy-wl4vl                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 snapshot-controller-56fcc65765-ksj7l          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 snapshot-controller-56fcc65765-v98wh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  volcano-system              volcano-admission-5874dfdd79-2cwr4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  volcano-system              volcano-controllers-789ffc5785-6tdfl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  volcano-system              volcano-scheduler-6c9778cbdf-q7mcw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-2nsqg                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m22s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  7m35s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 7m35s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m35s (x3 over 7m35s)  kubelet          Node ubuntu-20-agent-15 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m35s (x3 over 7m35s)  kubelet          Node ubuntu-20-agent-15 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m35s (x2 over 7m35s)  kubelet          Node ubuntu-20-agent-15 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m35s                  kubelet          Starting kubelet.
	  Normal   Starting                 7m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m30s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  7m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m30s                  kubelet          Node ubuntu-20-agent-15 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m30s                  kubelet          Node ubuntu-20-agent-15 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m30s                  kubelet          Node ubuntu-20-agent-15 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m25s                  node-controller  Node ubuntu-20-agent-15 event: Registered Node ubuntu-20-agent-15 in Controller
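
The node itself is healthy throughout: Ready since 18:46:41, no memory/disk/PID pressure, and only 850m of 8 CPUs requested. The failure is confined to the scheduler pod's image pull, so its event stream is the next thing to read (pod name taken from this run):

    kubectl -n volcano-system get events \
      --field-selector involvedObject.name=volcano-scheduler-6c9778cbdf-q7mcw \
      --sort-by=.lastTimestamp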
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 eb 2d e1 6f 64 08 06
	[  +4.094712] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 81 fa 1e ea 45 08 06
	[  +0.026007] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 c2 2e aa 1f 86 08 06
	[  +2.419586] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 8e 76 69 41 3d 08 06
	[  +1.529031] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 8a 32 b0 19 72 08 06
	[  +4.766061] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa d6 3c bd 28 cc 08 06
	[  +0.198545] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 7b bb 99 2c 27 08 06
	[  +0.085629] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 9e 70 2f 37 72 08 06
	[  +3.221932] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 0b dc 0c bd 9c 08 06
	[Dec 5 18:48] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 d2 53 31 68 c8 08 06
	[  +0.027581] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 07 9b d6 30 b0 08 06
	[  +9.711033] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 d7 da 78 3f 8e 08 06
	[  +0.000509] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 a6 3b d0 14 e9 08 06
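
The martian-source lines are broadcast ARP frames (EtherType 08 06, destination ff:ff:ff:ff:ff:ff) from pod interfaces showing up on eth0 as pods churn; they are noise here, not a fault. If the logging is unwanted it can be silenced on the host with a standard kernel sysctl (sketch):

    sudo sysctl -w net.ipv4.conf.all.log_martians=0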
	
	
	==> etcd [26cdc8676d8c] <==
	{"level":"info","ts":"2024-12-05T18:46:39.451936Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.128.15.240:2380"}
	{"level":"info","ts":"2024-12-05T18:46:39.452152Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"13f0e7e2a3d8cc98","initial-advertise-peer-urls":["https://10.128.15.240:2380"],"listen-peer-urls":["https://10.128.15.240:2380"],"advertise-client-urls":["https://10.128.15.240:2379"],"listen-client-urls":["https://10.128.15.240:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T18:46:39.452185Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T18:46:40.338533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-05T18:46:40.338584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-05T18:46:40.338625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 received MsgPreVoteResp from 13f0e7e2a3d8cc98 at term 1"}
	{"level":"info","ts":"2024-12-05T18:46:40.338641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T18:46:40.338647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 received MsgVoteResp from 13f0e7e2a3d8cc98 at term 2"}
	{"level":"info","ts":"2024-12-05T18:46:40.338656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T18:46:40.338663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 13f0e7e2a3d8cc98 elected leader 13f0e7e2a3d8cc98 at term 2"}
	{"level":"info","ts":"2024-12-05T18:46:40.339755Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"13f0e7e2a3d8cc98","local-member-attributes":"{Name:ubuntu-20-agent-15 ClientURLs:[https://10.128.15.240:2379]}","request-path":"/0/members/13f0e7e2a3d8cc98/attributes","cluster-id":"3112ce273fbe8262","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T18:46:40.339757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T18:46:40.339798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T18:46:40.339793Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T18:46:40.339956Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T18:46:40.339983Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T18:46:40.340514Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3112ce273fbe8262","local-member-id":"13f0e7e2a3d8cc98","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T18:46:40.340589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T18:46:40.340623Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T18:46:40.340875Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T18:46:40.341044Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T18:46:40.341808Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.240:2379"}
	{"level":"info","ts":"2024-12-05T18:46:40.341826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T18:46:57.391036Z","caller":"traceutil/trace.go:171","msg":"trace[446777825] transaction","detail":"{read_only:false; response_revision:851; number_of_response:1; }","duration":"133.291291ms","start":"2024-12-05T18:46:57.257719Z","end":"2024-12-05T18:46:57.391010Z","steps":["trace[446777825] 'process raft request'  (duration: 88.877653ms)","trace[446777825] 'compare'  (duration: 44.152853ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T18:46:57.700377Z","caller":"traceutil/trace.go:171","msg":"trace[1301265378] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"100.432509ms","start":"2024-12-05T18:46:57.599912Z","end":"2024-12-05T18:46:57.700344Z","steps":["trace[1301265378] 'process raft request'  (duration: 50.309812ms)","trace[1301265378] 'compare'  (duration: 49.976695ms)"],"step_count":2}
	
	
	==> gcp-auth [99954234e5be] <==
	2024/12/05 18:48:11 GCP Auth Webhook started!
	
	
	==> kernel <==
	 18:54:13 up  1:36,  0 users,  load average: 0.07, 0.62, 1.35
	Linux ubuntu-20-agent-15 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [6201519c962c] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1205 18:47:07.564392       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.101.228:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.101.228:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.101.228:443: connect: connection refused" logger="UnhandledError"
	I1205 18:47:07.600873       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 18:47:11.575978       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
	E1205 18:47:11.576028       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
	W1205 18:47:11.577730       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
	W1205 18:47:11.591561       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
	E1205 18:47:11.591603       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
	W1205 18:47:11.593438       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
	W1205 18:47:17.107602       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
	E1205 18:47:17.107653       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
	W1205 18:47:17.110309       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
	W1205 18:47:27.585832       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
	E1205 18:47:27.585872       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
	W1205 18:47:27.587632       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
	W1205 18:47:27.599415       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
	E1205 18:47:27.599450       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
	W1205 18:47:27.601115       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.247.101:443: connect: connection refused
	W1205 18:47:37.596163       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
	E1205 18:47:37.596204       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
	W1205 18:47:59.596880       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
	E1205 18:47:59.596955       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
	W1205 18:47:59.607647       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.57.204:443: connect: connection refused
	E1205 18:47:59.607693       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.57.204:443: connect: connection refused" logger="UnhandledError"
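
Note the asymmetry above: gcp-auth-mutate.k8s.io is "failing open" (pods are still admitted without credentials while the webhook is unreachable), whereas mutatepod.volcano.sh is "failing closed" (pod creation is rejected until volcano-admission comes up). That maps to each webhook's failurePolicy, which can be listed like so (illustrative, standard kubectl):

    kubectl get mutatingwebhookconfigurations \
      -o custom-columns='NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy'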
	
	
	==> kube-controller-manager [0b56e4852737] <==
	I1205 18:48:01.487469       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1205 18:48:02.659941       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1205 18:48:02.670266       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1205 18:48:03.666408       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1205 18:48:03.673223       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1205 18:48:03.676826       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1205 18:48:03.678592       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1205 18:48:03.684199       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1205 18:48:03.689366       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1205 18:48:07.803227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="77.134µs"
	I1205 18:48:11.676623       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="7.494387ms"
	I1205 18:48:11.676955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="101.368µs"
	I1205 18:48:15.352050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-15"
	I1205 18:48:22.801022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="78.72µs"
	I1205 18:48:33.014786       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1205 18:48:33.016586       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1205 18:48:33.040474       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1205 18:48:33.041705       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1205 18:48:53.802710       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="72.863µs"
	I1205 18:49:04.800458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="81.156µs"
	I1205 18:50:20.802182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="89.496µs"
	I1205 18:50:32.799262       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="64.743µs"
	I1205 18:53:04.800652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="133.078µs"
	I1205 18:53:18.798799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-6c9778cbdf" duration="104.187µs"
	I1205 18:53:21.852809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-15"
	
	
	==> kube-proxy [91ca714db528] <==
	I1205 18:46:50.764217       1 server_linux.go:66] "Using iptables proxy"
	I1205 18:46:51.029933       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.128.15.240"]
	E1205 18:46:51.030011       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 18:46:51.085295       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 18:46:51.085462       1 server_linux.go:169] "Using iptables Proxier"
	I1205 18:46:51.098201       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 18:46:51.098652       1 server.go:483] "Version info" version="v1.31.2"
	I1205 18:46:51.098681       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 18:46:51.101546       1 config.go:199] "Starting service config controller"
	I1205 18:46:51.101578       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 18:46:51.101621       1 config.go:105] "Starting endpoint slice config controller"
	I1205 18:46:51.101629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 18:46:51.102260       1 config.go:328] "Starting node config controller"
	I1205 18:46:51.102277       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 18:46:51.205823       1 shared_informer.go:320] Caches are synced for node config
	I1205 18:46:51.205884       1 shared_informer.go:320] Caches are synced for service config
	I1205 18:46:51.205915       1 shared_informer.go:320] Caches are synced for endpoint slice config
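
kube-proxy came up cleanly; the one warning concerns the unset nodePortAddresses, and the log itself suggests the fix (--nodeport-addresses primary). On kubeadm-style clusters that setting lives in the kube-proxy ConfigMap; a sketch of applying the suggestion:

    kubectl -n kube-system edit configmap kube-proxy
    # in the config.conf key, set (per the warning above):
    #   nodePortAddresses: ["primary"]
    # then restart the daemonset to pick it up:
    kubectl -n kube-system rollout restart daemonset kube-proxy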
	
	
	==> kube-scheduler [4903d046814c] <==
	W1205 18:46:41.222859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 18:46:41.222861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 18:46:41.222884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 18:46:41.222901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 18:46:41.222887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1205 18:46:41.222920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 18:46:41.222903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 18:46:41.222971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 18:46:41.223111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 18:46:41.223131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 18:46:41.223133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1205 18:46:41.223148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 18:46:42.044450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 18:46:42.044512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 18:46:42.077453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 18:46:42.077502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 18:46:42.091145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 18:46:42.091188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 18:46:42.148431       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 18:46:42.148473       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 18:46:42.229431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 18:46:42.229478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 18:46:42.404340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 18:46:42.404394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1205 18:46:44.720967       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Wed 2024-10-16 18:17:53 UTC, end at Thu 2024-12-05 18:54:13 UTC. --
	Dec 05 18:49:53 ubuntu-20-agent-15 kubelet[394162]: E1205 18:49:53.792545  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:50:06 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:06.264473  394162 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Dec 05 18:50:06 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:06.264539  394162 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Dec 05 18:50:06 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:06.264659  394162 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:volcano-scheduler,Image:docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882,Command:[],Args:[--logtostderr --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf --enable-healthz=true --enable-metrics=true --leader-elect=false -v=3 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEBUG_SOCKET_DIR,Value:/tmp/klog-socks,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scheduler-config,ReadOnly:false,MountPath:/volcano.scheduler,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:klog-sock,ReadOnly:false,MountPath:/tmp/klog-socks,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bz59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-scheduler-6c9778cbdf-q7mcw_volcano-system(33f5e98f-fb04-4f70-b72c-d223e4812765): ErrImagePull: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 05 18:50:06 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:06.265892  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:50:20 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:20.792295  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:50:32 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:32.791145  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:50:44 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:44.791249  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:50:59 ubuntu-20-agent-15 kubelet[394162]: E1205 18:50:59.790854  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:51:14 ubuntu-20-agent-15 kubelet[394162]: E1205 18:51:14.791751  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:51:29 ubuntu-20-agent-15 kubelet[394162]: E1205 18:51:29.791295  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:51:44 ubuntu-20-agent-15 kubelet[394162]: E1205 18:51:44.791260  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:51:55 ubuntu-20-agent-15 kubelet[394162]: E1205 18:51:55.791543  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:52:09 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:09.791587  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:52:21 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:21.791765  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:52:36 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:36.791000  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:52:51 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:51.242173  394162 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Dec 05 18:52:51 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:51.242236  394162 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Dec 05 18:52:51 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:51.242384  394162 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:volcano-scheduler,Image:docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882,Command:[],Args:[--logtostderr --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf --enable-healthz=true --enable-metrics=true --leader-elect=false -v=3 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEBUG_SOCKET_DIR,Value:/tmp/klog-socks,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scheduler-config,ReadOnly:false,MountPath:/volcano.scheduler,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:klog-sock,ReadOnly:false,MountPath:/tmp/klog-socks,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bz59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-scheduler-6c9778cbdf-q7mcw_volcano-system(33f5e98f-fb04-4f70-b72c-d223e4812765): ErrImagePull: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 05 18:52:51 ubuntu-20-agent-15 kubelet[394162]: E1205 18:52:51.243602  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:53:04 ubuntu-20-agent-15 kubelet[394162]: E1205 18:53:04.792141  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:53:18 ubuntu-20-agent-15 kubelet[394162]: E1205 18:53:18.791311  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:53:32 ubuntu-20-agent-15 kubelet[394162]: E1205 18:53:32.791588  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:53:45 ubuntu-20-agent-15 kubelet[394162]: E1205 18:53:45.794611  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	Dec 05 18:54:00 ubuntu-20-agent-15 kubelet[394162]: E1205 18:54:00.791353  394162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-6c9778cbdf-q7mcw" podUID="33f5e98f-fb04-4f70-b72c-d223e4812765"
	
	
	==> storage-provisioner [22de0a82d160] <==
	I1205 18:46:51.457275       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 18:46:51.470453       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 18:46:51.470538       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 18:46:51.478719       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 18:46:51.478966       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_2d25f92c-bf4b-417f-8537-28fee34ab274!
	I1205 18:46:51.480823       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90706887-4296-427b-b150-294488763ac5", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-15_2d25f92c-bf4b-417f-8537-28fee34ab274 became leader
	I1205 18:46:51.580193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_2d25f92c-bf4b-417f-8537-28fee34ab274!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: volcano-admission-init-qp6tk volcano-scheduler-6c9778cbdf-q7mcw
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod volcano-admission-init-qp6tk volcano-scheduler-6c9778cbdf-q7mcw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod volcano-admission-init-qp6tk volcano-scheduler-6c9778cbdf-q7mcw: exit status 1 (63.386828ms)

** stderr ** 
	Error from server (NotFound): pods "volcano-admission-init-qp6tk" not found
	Error from server (NotFound): pods "volcano-scheduler-6c9778cbdf-q7mcw" not found

** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod volcano-admission-init-qp6tk volcano-scheduler-6c9778cbdf-q7mcw: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.707642605s)
--- FAIL: TestAddons/serial/Volcano (372.88s)
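Note on the failure above: the kube-scheduler "forbidden" warnings earlier in the log clear once its RBAC caches sync (see the 18:46:44 "Caches are synced" line) and are unrelated. The test fails because every pull of docker.io/volcanosh/vc-scheduler:v1.10.0 is rejected with "toomanyrequests", Docker Hub's anonymous pull rate limit, so the pod never leaves ImagePullBackOff. One possible mitigation on a runner like this (a sketch only, assuming Docker Hub credentials are available; the secret name "regcred" is illustrative, not part of the test):

	# With driver=none the kubelet uses the host's Docker daemon, so an
	# authenticated daemon raises the pull quota for all pods:
	docker login -u <dockerhub-user>

	# Alternatively, attach pull credentials to the service account the
	# scheduler pod runs under:
	kubectl create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> --docker-password=<access-token> \
	  -n volcano-system
	kubectl patch serviceaccount volcano-scheduler -n volcano-system \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'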


Test pass (105/169)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.95
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.31.2/json-events 0.97
15 TestDownloadOnly/v1.31.2/binaries 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.13
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.58
22 TestOffline 40.71
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 101.42
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.52
35 TestAddons/parallel/Registry 13.82
37 TestAddons/parallel/InspektorGadget 10.45
38 TestAddons/parallel/MetricsServer 5.43
40 TestAddons/parallel/CSI 65.81
41 TestAddons/parallel/Headlamp 15.98
42 TestAddons/parallel/CloudSpanner 5.28
44 TestAddons/parallel/NvidiaDevicePlugin 6.26
45 TestAddons/parallel/Yakd 10.55
47 TestAddons/StoppedEnableDisable 10.69
49 TestCertExpiration 227.4
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 30.73
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 28.04
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.07
67 TestFunctional/serial/MinikubeKubectlCmd 0.11
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
69 TestFunctional/serial/ExtraConfig 37.14
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 0.85
72 TestFunctional/serial/LogsFileCmd 0.91
73 TestFunctional/serial/InvalidService 4.75
75 TestFunctional/parallel/ConfigCmd 0.31
76 TestFunctional/parallel/DashboardCmd 8.56
77 TestFunctional/parallel/DryRun 0.17
78 TestFunctional/parallel/InternationalLanguage 0.1
79 TestFunctional/parallel/StatusCmd 0.45
82 TestFunctional/parallel/ProfileCmd/profile_not_create 0.24
83 TestFunctional/parallel/ProfileCmd/profile_list 0.21
84 TestFunctional/parallel/ProfileCmd/profile_json_output 0.22
86 TestFunctional/parallel/ServiceCmd/DeployApp 9.15
87 TestFunctional/parallel/ServiceCmd/List 0.34
88 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
89 TestFunctional/parallel/ServiceCmd/HTTPS 0.16
90 TestFunctional/parallel/ServiceCmd/Format 0.16
91 TestFunctional/parallel/ServiceCmd/URL 0.16
92 TestFunctional/parallel/ServiceCmdConnect 8.32
93 TestFunctional/parallel/AddonsCmd 0.13
94 TestFunctional/parallel/PersistentVolumeClaim 20.67
107 TestFunctional/parallel/MySQL 22.42
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.39
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 14.39
116 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.41
122 TestFunctional/parallel/License 0.16
123 TestFunctional/delete_echo-server_images 0.03
124 TestFunctional/delete_my-image_image 0.02
125 TestFunctional/delete_minikube_cached_images 0.02
130 TestImageBuild/serial/Setup 14.47
131 TestImageBuild/serial/NormalBuild 0.96
132 TestImageBuild/serial/BuildWithBuildArg 0.64
133 TestImageBuild/serial/BuildWithDockerIgnore 0.4
134 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.41
138 TestJSONOutput/start/Command 27.19
139 TestJSONOutput/start/Audit 0
141 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/pause/Command 0.53
145 TestJSONOutput/pause/Audit 0
147 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/unpause/Command 0.42
151 TestJSONOutput/unpause/Audit 0
153 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/stop/Command 5.33
157 TestJSONOutput/stop/Audit 0
159 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
161 TestErrorJSONOutput 0.22
166 TestMainNoArgs 0.05
167 TestMinikubeProfile 34.81
175 TestPause/serial/Start 28.78
176 TestPause/serial/SecondStartNoReconfiguration 24.95
177 TestPause/serial/Pause 0.52
178 TestPause/serial/VerifyStatus 0.14
179 TestPause/serial/Unpause 0.43
180 TestPause/serial/PauseAgain 0.58
181 TestPause/serial/DeletePaused 1.64
182 TestPause/serial/VerifyDeletedResources 0.07
196 TestRunningBinaryUpgrade 67.38
198 TestStoppedBinaryUpgrade/Setup 0.7
199 TestStoppedBinaryUpgrade/Upgrade 50.47
200 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
201 TestKubernetesUpgrade 315.64
TestDownloadOnly/v1.20.0/json-events (1.95s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.953723452s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.95s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (73.084055ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 18:45:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 18:45:45.472207  389003 out.go:345] Setting OutFile to fd 1 ...
	I1205 18:45:45.472349  389003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 18:45:45.472359  389003 out.go:358] Setting ErrFile to fd 2...
	I1205 18:45:45.472364  389003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 18:45:45.472570  389003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-381606/.minikube/bin
	W1205 18:45:45.472724  389003 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20052-381606/.minikube/config/config.json: open /home/jenkins/minikube-integration/20052-381606/.minikube/config/config.json: no such file or directory
	I1205 18:45:45.473409  389003 out.go:352] Setting JSON to true
	I1205 18:45:45.474380  389003 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5294,"bootTime":1733419051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 18:45:45.474509  389003 start.go:139] virtualization: kvm guest
	I1205 18:45:45.477304  389003 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1205 18:45:45.477448  389003 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20052-381606/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 18:45:45.477540  389003 notify.go:220] Checking for updates...
	I1205 18:45:45.479039  389003 out.go:169] MINIKUBE_LOCATION=20052
	I1205 18:45:45.480746  389003 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 18:45:45.482275  389003 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig
	I1205 18:45:45.483780  389003 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube
	I1205 18:45:45.485540  389003 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.31.2/json-events (0.97s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.2/json-events (0.97s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
--- PASS: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (70.449922ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC | 05 Dec 24 18:45 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 05 Dec 24 18:45 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 18:45:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 18:45:47.779395  389156 out.go:345] Setting OutFile to fd 1 ...
	I1205 18:45:47.779513  389156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 18:45:47.779521  389156 out.go:358] Setting ErrFile to fd 2...
	I1205 18:45:47.779525  389156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 18:45:47.779710  389156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-381606/.minikube/bin
	I1205 18:45:47.780360  389156 out.go:352] Setting JSON to true
	I1205 18:45:47.781392  389156 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5297,"bootTime":1733419051,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 18:45:47.781521  389156 start.go:139] virtualization: kvm guest
	I1205 18:45:47.783842  389156 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1205 18:45:47.784006  389156 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20052-381606/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 18:45:47.784066  389156 notify.go:220] Checking for updates...
	I1205 18:45:47.785472  389156 out.go:169] MINIKUBE_LOCATION=20052
	I1205 18:45:47.787113  389156 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 18:45:47.788878  389156 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig
	I1205 18:45:47.790478  389156 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube
	I1205 18:45:47.792058  389156 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
I1205 18:45:49.323189  388991 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:36049 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.58s)
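Note: the "Not caching binary" line above shows minikube verifying the kubectl download against the published SHA-256 file via the checksum=file: query. The same check can be reproduced by hand with the standard dl.k8s.io layout (a sketch, not part of the test):

	curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl
	curl -LO https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check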

TestOffline (40.71s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (39.027928502s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.686401988s)
--- PASS: TestOffline (40.71s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (54.505765ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (53.222185ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (101.42s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=none --bootstrapper=kubeadm: (1m41.42196979s)
--- PASS: TestAddons/Setup (101.42s)
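Note: stacking --addons flags on minikube start, as above, is broadly equivalent to enabling each addon on the already-running cluster afterwards, e.g. (illustrative, not part of the test):

	out/minikube-linux-amd64 -p minikube addons enable volcano
	out/minikube-linux-amd64 -p minikube addons enable csi-hostpath-driver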

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.52s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context minikube create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context minikube create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [686273ed-2b22-412e-853d-a49257610ea2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [686273ed-2b22-412e-853d-a49257610ea2] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004715842s
addons_test.go:633: (dbg) Run:  kubectl --context minikube exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context minikube describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context minikube exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.52s)

TestAddons/parallel/Registry (13.82s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.501602ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jgf47" [9f55f79d-b172-464c-9881-382ccbd93912] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005123237s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wl4vl" [5cf2fdd8-e0ad-481c-b4ee-4307a7236b36] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005023258s
addons_test.go:331: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.306225181s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/12/05 18:54:54 [DEBUG] GET http://10.128.15.240:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.82s)

TestAddons/parallel/InspektorGadget (10.45s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-c4wk4" [bbe6e9b6-5908-4277-949b-0739954e7b08] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004370097s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable inspektor-gadget --alsologtostderr -v=1: (5.444071035s)
--- PASS: TestAddons/parallel/InspektorGadget (10.45s)

TestAddons/parallel/MetricsServer (5.43s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.222682ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4rstm" [dfef15df-0ac2-42d6-ae56-67fdb95b6a8f] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004072554s
addons_test.go:402: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.43s)

TestAddons/parallel/CSI (65.81s)

=== RUN   TestAddons/parallel/CSI
I1205 18:55:10.539061  388991 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 18:55:10.543216  388991 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 18:55:10.543241  388991 kapi.go:107] duration metric: took 4.193695ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.201906ms
addons_test.go:491: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a68e456d-599d-44ee-a6a2-cbd024187a7a] Pending
helpers_test.go:344: "task-pv-pod" [a68e456d-599d-44ee-a6a2-cbd024187a7a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a68e456d-599d-44ee-a6a2-cbd024187a7a] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.005513261s
addons_test.go:511: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [18acb164-0e0a-4719-9889-3543ef2b7cad] Pending
helpers_test.go:344: "task-pv-pod-restore" [18acb164-0e0a-4719-9889-3543ef2b7cad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [18acb164-0e0a-4719-9889-3543ef2b7cad] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004003274s
addons_test.go:553: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.366240144s)
--- PASS: TestAddons/parallel/CSI (65.81s)
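The wall of identical "get pvc hpvc-restore" lines above is the test's polling loop: helpers_test.go re-runs the jsonpath query until the claim reports phase "Bound" or the 6m0s deadline expires. A minimal Go sketch of that loop (illustrative only, not the actual helpers_test.go code; the 2-second interval is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls `kubectl get pvc` the way the log above does,
// returning nil once .status.phase reads "Bound".
func waitForPVCBound(name, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "minikube",
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	if err := waitForPVCBound("hpvc-restore", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}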

TestAddons/parallel/Headlamp (15.98s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-vpjcp" [f2ff7b28-40f6-4492-a9fd-2ddc647f4cb9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-vpjcp" [f2ff7b28-40f6-4492-a9fd-2ddc647f4cb9] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003688522s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.46698979s)
--- PASS: TestAddons/parallel/Headlamp (15.98s)

TestAddons/parallel/CloudSpanner (5.28s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-6mw9g" [05540b80-3746-4291-a21e-4111fbf40849] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004072654s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.28s)

TestAddons/parallel/NvidiaDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ztwcn" [95079423-3a8c-43d2-af27-55852564e9ae] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003821487s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.26s)

TestAddons/parallel/Yakd (10.55s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2nsqg" [102a4c05-1428-4c2a-bd24-a57113b7e16b] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004455546s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.546584615s)
--- PASS: TestAddons/parallel/Yakd (10.55s)

TestAddons/StoppedEnableDisable (10.69s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.358303724s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.69s)

TestCertExpiration (227.4s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.38274714s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (31.308654915s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.707191773s)
--- PASS: TestCertExpiration (227.40s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20052-381606/.minikube/files/etc/test/nested/copy/388991/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (30.73s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (30.727555666s)
--- PASS: TestFunctional/serial/StartWithProxy (30.73s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.04s)

=== RUN   TestFunctional/serial/SoftStart
I1205 19:01:24.467248  388991 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (28.036667643s)
functional_test.go:663: soft start took 28.037332001s for "minikube" cluster.
I1205 19:01:52.504252  388991 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (28.04s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (37.14s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.144492012s)
functional_test.go:761: restart took 37.144641339s for "minikube" cluster.
I1205 19:02:29.987608  388991 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (37.14s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.85s)

TestFunctional/serial/LogsFileCmd (0.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd1622068994/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.91s)

TestFunctional/serial/InvalidService (4.75s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (169.993701ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://10.128.15.240:32152 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.399978712s)
--- PASS: TestFunctional/serial/InvalidService (4.75s)

TestFunctional/parallel/ConfigCmd (0.31s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (51.793448ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (48.651691ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
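The ConfigCmd test treats exit status 14 as the expected outcome of "config get cpus" once the key has been unset. A sketch of how such an exit-code assertion can be written in Go (hypothetical helper, not minikube's test code; the binary path is taken from the log above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and reports its exit status; 0 means success.
func exitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return 0, err // nil on success, non-nil if the command never started
}

func main() {
	code, err := exitCode("out/minikube-linux-amd64", "-p", "minikube", "config", "get", "cpus")
	if err != nil {
		panic(err)
	}
	fmt.Println("exit code:", code) // 14 expected while "cpus" is unset
}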

TestFunctional/parallel/DashboardCmd (8.56s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/12/05 19:02:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 427351: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.56s)

TestFunctional/parallel/DryRun (0.17s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (85.545452ms)
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
-- /stdout --
** stderr ** 
	I1205 19:02:45.478587  427728 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:02:45.478837  427728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:45.478848  427728 out.go:358] Setting ErrFile to fd 2...
	I1205 19:02:45.478852  427728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:45.479018  427728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-381606/.minikube/bin
	I1205 19:02:45.479567  427728 out.go:352] Setting JSON to false
	I1205 19:02:45.480681  427728 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6314,"bootTime":1733419051,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:02:45.480777  427728 start.go:139] virtualization: kvm guest
	I1205 19:02:45.482995  427728 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:02:45.484423  427728 out.go:177]   - MINIKUBE_LOCATION=20052
	W1205 19:02:45.484394  427728 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20052-381606/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 19:02:45.484451  427728 notify.go:220] Checking for updates...
	I1205 19:02:45.487079  427728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:02:45.488596  427728 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig
	I1205 19:02:45.490026  427728 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube
	I1205 19:02:45.491373  427728 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:02:45.492692  427728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:02:45.494653  427728 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 19:02:45.495113  427728 exec_runner.go:51] Run: systemctl --version
	I1205 19:02:45.497756  427728 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:02:45.509388  427728 out.go:177] * Using the none driver based on existing profile
	I1205 19:02:45.510699  427728 start.go:297] selected driver: none
	I1205 19:02:45.510718  427728 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:02:45.510912  427728 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:02:45.510960  427728 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W1205 19:02:45.511219  427728 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I1205 19:02:45.513608  427728 out.go:201] 
	W1205 19:02:45.514896  427728 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 19:02:45.516048  427728 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.17s)
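The dry run fails exactly where it should: the requested 250MB is far below the 1800MB floor minikube reports, so the command exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY before doing any work. A toy version of that validation (the constant and message come from the stderr above; the function itself is illustrative, not minikube's code):

package main

import "fmt"

const minUsableMemoryMB = 1800 // floor reported in the stderr above

// validateRequestedMemory mirrors the check implied by the log: reject any
// allocation below the usable minimum.
func validateRequestedMemory(mb int) error {
	if mb < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMB is less than the usable minimum of %dMB", mb, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // fails, as in the dry run
	fmt.Println(validateRequestedMemory(4000)) // passes
}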

TestFunctional/parallel/InternationalLanguage (0.1s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (95.310354ms)
-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
-- /stdout --
** stderr ** 
	I1205 19:02:45.656639  427759 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:02:45.656782  427759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:45.656793  427759 out.go:358] Setting ErrFile to fd 2...
	I1205 19:02:45.656800  427759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:45.657251  427759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-381606/.minikube/bin
	I1205 19:02:45.657888  427759 out.go:352] Setting JSON to false
	I1205 19:02:45.658962  427759 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6315,"bootTime":1733419051,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:02:45.659109  427759 start.go:139] virtualization: kvm guest
	I1205 19:02:45.661899  427759 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1205 19:02:45.664031  427759 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:02:45.664015  427759 notify.go:220] Checking for updates...
	W1205 19:02:45.664098  427759 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20052-381606/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 19:02:45.666810  427759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:02:45.668149  427759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig
	I1205 19:02:45.669565  427759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube
	I1205 19:02:45.670975  427759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:02:45.672173  427759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:02:45.673810  427759 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 19:02:45.674147  427759 exec_runner.go:51] Run: systemctl --version
	I1205 19:02:45.677103  427759 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:02:45.689727  427759 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I1205 19:02:45.690870  427759 start.go:297] selected driver: none
	I1205 19:02:45.690905  427759 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:02:45.691055  427759 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:02:45.691099  427759 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W1205 19:02:45.691387  427759 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I1205 19:02:45.693751  427759 out.go:201] 
	W1205 19:02:45.694941  427759 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 19:02:45.696216  427759 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)

TestFunctional/parallel/StatusCmd (0.45s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.45s)
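The -f argument above is a Go text/template rendered against minikube's status struct (the test's own spelling "kublet" in the format string is preserved verbatim). A self-contained sketch of the same mechanism, with a hypothetical Status struct standing in for minikube's:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the struct minikube renders; only the field
// names referenced by the template matter.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}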

TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "163.178217ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.611668ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "165.885345ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "51.044576ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-q96th" [6623cf2f-6f79-4269-afba-7a86d6ad7371] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-q96th" [6623cf2f-6f79-4269-afba-7a86d6ad7371] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.00439192s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "339.965995ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.128.15.240:30235
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

TestFunctional/parallel/ServiceCmd/Format (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.16s)

TestFunctional/parallel/ServiceCmd/URL (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.128.15.240:30235
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.16s)

TestFunctional/parallel/ServiceCmdConnect (8.32s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hqhfw" [db30fed1-7fcf-462d-af6b-9edcf2bd190a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hqhfw" [db30fed1-7fcf-462d-af6b-9edcf2bd190a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004178412s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.128.15.240:31257
functional_test.go:1675: http://10.128.15.240:31257: success! body:
Hostname: hello-node-connect-67bdd5bbb4-hqhfw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.128.15.240:8080/

Request Headers:
	accept-encoding=gzip
	host=10.128.15.240:31257
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.32s)
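The "user-agent=Go-http-client/1.1" header echoed in the body above is what Go's default HTTP client sends, consistent with the test fetching the NodePort URL directly. A minimal sketch of such a probe (URL copied from the log; error handling simplified):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://10.128.15.240:31257") // endpoint from the log
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The echoserver reflects the request back, including our User-Agent.
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}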

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (20.67s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [229c333c-df54-4e4b-bef4-c7573effe70c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004354832s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [95ecc258-4ac9-412f-a647-19587733cf18] Pending
helpers_test.go:344: "sp-pod" [95ecc258-4ac9-412f-a647-19587733cf18] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004078486s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [29febd74-2fa7-4724-af0c-04b54bdffa73] Pending
helpers_test.go:344: "sp-pod" [29febd74-2fa7-4724-af0c-04b54bdffa73] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [29febd74-2fa7-4724-af0c-04b54bdffa73] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004434857s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.67s)
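The sequence above is a persistence round-trip: write a marker file through the first sp-pod, delete the pod, schedule a replacement against the same claim, and list the mount to prove the data outlived the pod. Sketched below with a hypothetical kubectl wrapper (the real test also waits for the new pod to reach Running between the apply and the ls):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl shells out the way the integration tests do.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "minikube"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // expect "foo" to survive
	}
	for _, s := range steps {
		out, err := kubectl(s...)
		fmt.Printf("kubectl %v\n%s(err=%v)\n", s, out, err)
	}
}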

TestFunctional/parallel/MySQL (22.42s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-m8svj" [fea77784-375d-41fe-990f-ee53850bdd9a] Pending
helpers_test.go:344: "mysql-6cdb49bbb-m8svj" [fea77784-375d-41fe-990f-ee53850bdd9a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-m8svj" [fea77784-375d-41fe-990f-ee53850bdd9a] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003994977s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-m8svj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-m8svj -- mysql -ppassword -e "show databases;": exit status 1 (142.348081ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1205 19:03:43.624365  388991 retry.go:31] will retry after 1.149635423s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-m8svj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-m8svj -- mysql -ppassword -e "show databases;": exit status 1 (111.567985ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1205 19:03:44.886424  388991 retry.go:31] will retry after 942.641003ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-m8svj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-m8svj -- mysql -ppassword -e "show databases;": exit status 1 (112.95917ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1205 19:03:45.942678  388991 retry.go:31] will retry after 2.671592942s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-m8svj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.42s)
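The two failure modes above are normal mysqld startup noise (the server first refuses the root password while initializing, then the socket is briefly absent), and the "retry.go:31] will retry after ..." lines show the harness absorbing them with increasing waits. An illustrative retry loop in the same spirit (not minikube's retry package; the doubling backoff is an assumption, the real waits are jittered):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	wait := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		err := exec.Command("kubectl", "--context", "minikube", "exec",
			"mysql-6cdb49bbb-m8svj", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
		if err == nil {
			fmt.Println("mysql is answering")
			return
		}
		fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		wait *= 2 // assumed backoff; the log shows jittered waits of roughly 1s to 2.7s
	}
}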

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.39s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.388175025s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.39s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (14.39s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.393461975s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.39s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.41s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.41s)

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (14.47s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.473616859s)
--- PASS: TestImageBuild/serial/Setup (14.47s)

TestImageBuild/serial/NormalBuild (0.96s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
--- PASS: TestImageBuild/serial/NormalBuild (0.96s)

TestImageBuild/serial/BuildWithBuildArg (0.64s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.64s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.40s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.40s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

                                                
                                    
x
+
TestJSONOutput/start/Command (27.19s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (27.186056244s)
--- PASS: TestJSONOutput/start/Command (27.19s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.53s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.42s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.42s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.328005265s)
--- PASS: TestJSONOutput/stop/Command (5.33s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.505952ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"086b8695-c1dc-4057-8318-23d9f2f53e14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4bc016f9-7b6d-489d-a532-e0966aa26368","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20052"}}
	{"specversion":"1.0","id":"2618ce37-89db-4f8f-899e-74fe85040a9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2ca917e4-116a-410b-a7f6-9000f3382e3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig"}}
	{"specversion":"1.0","id":"79174fa6-7f98-45ae-9de0-009edaaca67e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube"}}
	{"specversion":"1.0","id":"6022983c-fa40-4a8d-b1f6-a28909e1ce21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ee3b1b7c-7a44-47a5-b912-a7de7ce39ddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"61b51f63-ab1f-442a-b5f3-e6dcbf892947","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.22s)
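
Note: the `--output=json` runs above emit one CloudEvents-style JSON object per line, as captured in the TestErrorJSONOutput stdout. A minimal consumer sketch follows, assuming only the field names visible in that capture; the Event struct and its shape are hypothetical, modeled on the logged lines, not taken from minikube's source.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Event models the CloudEvents-style lines captured above (hypothetical
// struct; only fields visible in the log are included).
type Event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data        map[string]string `json:"data"`
}

func main() {
	// Feed it the stream, e.g.: minikube start --output=json | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev Event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}

On the failed run above, such a consumer would surface DRV_UNSUPPORTED_OS with exitcode 56, matching the process exit status the test asserts.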

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (34.81s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.657941725s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (18.190530433s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.32768205s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.81s)

                                                
                                    
x
+
TestPause/serial/Start (28.78s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (28.780408025s)
--- PASS: TestPause/serial/Start (28.78s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (24.95s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (24.948622139s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.95s)

                                                
                                    
x
+
TestPause/serial/Pause (0.52s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.52s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.14s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (139.438286ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)
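
Note: the `--layout=cluster` status above is a single JSON document with HTTP-like StatusCode values (200 OK, 405 Stopped, 418 Paused), and the non-zero process exit (status 2) is itself the paused signal. A minimal decoding sketch, assuming only the fields visible in the captured stdout; the type names are hypothetical.

package main

import (
	"encoding/json"
	"fmt"
)

// Component, Node, and ClusterStatus model only the fields visible in
// the VerifyStatus stdout above (hypothetical names).
type Component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type Node struct {
	Component
	Components map[string]Component `json:"Components"`
}

type ClusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]Component `json:"Components"`
	Nodes         []Node               `json:"Nodes"`
}

func main() {
	// Sample trimmed from the VerifyStatus output above.
	raw := `{"Name":"minikube","StatusCode":418,"StatusName":"Paused",
	         "Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK",
	         "Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st ClusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  node %s / %s: %s\n", n.Name, name, c.StatusName)
		}
	}
}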

                                                
                                    
x
+
TestPause/serial/Unpause (0.43s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.43s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.58s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.58s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.64s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.6400689s)
--- PASS: TestPause/serial/DeletePaused (1.64s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.07s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.07s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (67.38s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.311274852 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.311274852 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (28.743349015s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.808619754s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.200121053s)
--- PASS: TestRunningBinaryUpgrade (67.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (50.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1024636674 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1024636674 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (15.043524152s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1024636674 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1024636674 -p minikube stop: (23.750776037s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (11.674872099s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (50.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                    
x
+
TestKubernetesUpgrade (315.64s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.126993975s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.326575631s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (81.092039ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m17.038804882s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (77.991328ms)

                                                
                                                
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-381606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-381606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.655878025s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.273929474s)
--- PASS: TestKubernetesUpgrade (315.64s)
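
Note: as version_upgrade_test.go:234 records ("status error: exit status 7 (may be ok)"), `minikube status` deliberately exits non-zero for a stopped host, so callers must separate "command failed" from "cluster is down". A minimal sketch of that pattern, assuming only the exit-code behavior visible in this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log; a non-zero exit is expected when
	// the host is stopped, so it is not treated as a hard error.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "minikube",
		"status", "--format={{.Host}}").CombinedOutput()
	host := strings.TrimSpace(string(out))
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			fmt.Printf("host reported %q (exit 7: stopped, may be ok)\n", host)
			return
		}
		panic(err) // genuine failure running the command
	}
	fmt.Printf("host is %s\n", host)
}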

                                                
                                    

Test skip (63/169)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.2/preload-exists 0
14 TestDownloadOnly/v1.31.2/cached-images 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
36 TestAddons/parallel/Ingress 0
39 TestAddons/parallel/Olm 0
43 TestAddons/parallel/LocalPath 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
48 TestCertOptions 0
50 TestDockerFlags 0
51 TestForceSystemdFlag 0
52 TestForceSystemdEnv 0
53 TestDockerEnvContainerd 0
54 TestKVMDriverInstallOrUpdate 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
57 TestErrorSpam 0
66 TestFunctional/serial/CacheCmd 0
80 TestFunctional/parallel/MountCmd 0
97 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
98 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
99 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
105 TestFunctional/parallel/SSHCmd 0
106 TestFunctional/parallel/CpCmd 0
108 TestFunctional/parallel/FileSync 0
109 TestFunctional/parallel/CertSync 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/ImageCommands 0
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0
126 TestGvisorAddon 0
127 TestMultiControlPlane 0
135 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
162 TestKicCustomNetwork 0
163 TestKicExistingNetwork 0
164 TestKicCustomSubnet 0
165 TestKicStaticIP 0
168 TestMountStart 0
169 TestMultiNode 0
170 TestNetworkPlugins 0
171 TestNoKubernetes 0
172 TestChangeNoneUser 0
183 TestPreload 0
184 TestScheduledStopWindows 0
185 TestScheduledStopUnix 0
186 TestSkaffold 0
189 TestStartStop/group/old-k8s-version 0.14
190 TestStartStop/group/newest-cni 0.14
191 TestStartStop/group/default-k8s-diff-port 0.14
192 TestStartStop/group/no-preload 0.14
193 TestStartStop/group/disable-driver-mounts 0.14
194 TestStartStop/group/embed-certs 0.14
195 TestInsufficientStorage 0
202 TestMissingContainerUpgrade 0
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
addons_test.go:193: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (0s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:882: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestCertOptions (0s)

                                                
                                                
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestForceSystemdFlag (0s)

                                                
                                                
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

                                                
                                    
x
+
TestForceSystemdEnv (0s)

                                                
                                                
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestErrorSpam (0s)

                                                
                                                
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestMultiControlPlane (0s)

                                                
                                                
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestMountStart (0s)

                                                
                                                
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

                                                
                                    
x
+
TestMultiNode (0s)

                                                
                                                
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

                                                
                                    
x
+
TestNetworkPlugins (0s)

                                                
                                                
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

                                                
                                    
x
+
TestNoKubernetes (0s)

                                                
                                                
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestPreload (0s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.14s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.14s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    