Test Report: none_Linux 19688

911e4c99bab82008a0d80e5fa9ba6656b1cfd206:2024-09-23:36337

Failed tests (5/166)

Order  Failed test                           Duration (s)
29     TestAddons/serial/Volcano             361.64
31     TestAddons/serial/GCPAuth/Namespaces  47.66
33     TestAddons/parallel/Registry          11.89
38     TestAddons/parallel/CSI               371.71
39     TestAddons/parallel/Headlamp          481.97
TestAddons/serial/Volcano (361.64s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 9.370659ms
addons_test.go:835: volcano-scheduler stabilized in 9.434515ms
addons_test.go:843: volcano-admission stabilized in 9.541285ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-79dc4b78bb-zdd4g" [710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "volcano-system" "app=volcano-scheduler" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:857: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:857: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:857: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-23 10:48:44.721772643 +0000 UTC m=+817.103345572
addons_test.go:857: (dbg) Run:  kubectl --context minikube describe po volcano-scheduler-79dc4b78bb-zdd4g -n volcano-system
addons_test.go:857: (dbg) kubectl --context minikube describe po volcano-scheduler-79dc4b78bb-zdd4g -n volcano-system:
Name:                 volcano-scheduler-79dc4b78bb-zdd4g
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 ubuntu-20-agent-12/10.128.15.239
Start Time:           Mon, 23 Sep 2024 10:36:39 +0000
Labels:               app=volcano-scheduler
pod-template-hash=79dc4b78bb
Annotations:          <none>
Status:               Pending
IP:                   10.244.0.16
IPs:
IP:           10.244.0.16
Controlled By:  ReplicaSet/volcano-scheduler-79dc4b78bb
Containers:
volcano-scheduler:
Container ID:  
Image:         docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882
Image ID:      
Port:          <none>
Host Port:     <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
-v=3
2>&1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
DEBUG_SOCKET_DIR:  /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f8qhf (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
scheduler-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      volcano-scheduler-configmap
Optional:  false
klog-sock:
Type:          HostPath (bare host directory volume)
Path:          /tmp/klog-socks
HostPathType:  
kube-api-access-f8qhf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  12m                  default-scheduler  Successfully assigned volcano-system/volcano-scheduler-79dc4b78bb-zdd4g to ubuntu-20-agent-12
Normal   Pulling    10m (x4 over 12m)    kubelet            Pulling image "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": no such image: "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
Warning  Failed     9m52s (x6 over 11m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    118s (x41 over 11m)  kubelet            Back-off pulling image "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
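Note on the events above: the image reference carries a doubled registry prefix (`docker.io/docker.io/volcanosh/vc-scheduler:...`), which is not a resolvable repository path, so every pull attempt fails with "no such image" and the pod stays in ImagePullBackOff. A minimal sketch of detecting and stripping such a duplicated prefix (`normalize_image_ref` is a hypothetical helper for illustration, not part of minikube or the volcano addon):

```shell
#!/bin/sh
# Strip a duplicated leading "docker.io/" from an image reference, e.g.
#   docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0
# becomes
#   docker.io/volcanosh/vc-scheduler:v1.10.0
normalize_image_ref() {
  printf '%s\n' "$1" | sed 's|^docker\.io/docker\.io/|docker.io/|'
}

normalize_image_ref "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0"
```

Running the fixed reference through `docker pull` (or inspecting the deployment's `spec.containers[].image`) would confirm whether the doubled prefix is the only thing blocking the pull.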
addons_test.go:857: (dbg) Run:  kubectl --context minikube logs volcano-scheduler-79dc4b78bb-zdd4g -n volcano-system
addons_test.go:857: (dbg) Non-zero exit: kubectl --context minikube logs volcano-scheduler-79dc4b78bb-zdd4g -n volcano-system: exit status 1 (77.930232ms)

** stderr ** 
	Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-79dc4b78bb-zdd4g" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:857: kubectl --context minikube logs volcano-scheduler-79dc4b78bb-zdd4g -n volcano-system: exit status 1
addons_test.go:858: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:42273               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:36 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:42 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:36:19
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:36:19.158069 1588554 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:36:19.158231 1588554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:36:19.158241 1588554 out.go:358] Setting ErrFile to fd 2...
	I0923 10:36:19.158245 1588554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:36:19.158464 1588554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-1577701/.minikube/bin
	I0923 10:36:19.159125 1588554 out.go:352] Setting JSON to false
	I0923 10:36:19.160039 1588554 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":145130,"bootTime":1726942649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:36:19.160160 1588554 start.go:139] virtualization: kvm guest
	I0923 10:36:19.162394 1588554 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:36:19.163650 1588554 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:36:19.163676 1588554 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 10:36:19.163732 1588554 notify.go:220] Checking for updates...
	I0923 10:36:19.166389 1588554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:36:19.167804 1588554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:36:19.169081 1588554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	I0923 10:36:19.170968 1588554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:36:19.172507 1588554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:36:19.174424 1588554 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:36:19.185459 1588554 out.go:177] * Using the none driver based on user configuration
	I0923 10:36:19.186681 1588554 start.go:297] selected driver: none
	I0923 10:36:19.186694 1588554 start.go:901] validating driver "none" against <nil>
	I0923 10:36:19.186706 1588554 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:36:19.186759 1588554 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 10:36:19.187052 1588554 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0923 10:36:19.187561 1588554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:36:19.187804 1588554 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:36:19.187836 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:19.187883 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:19.187891 1588554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:36:19.187950 1588554 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:36:19.190491 1588554 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0923 10:36:19.192247 1588554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json ...
	I0923 10:36:19.192296 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json: {Name:mk0db601d978f1f6b111e723fd0658218dee1a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:19.192505 1588554 start.go:360] acquireMachinesLock for minikube: {Name:mka47a0638fa8ca4d22f1fa46c51878d308fb6cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:36:19.192555 1588554 start.go:364] duration metric: took 26.854µs to acquireMachinesLock for "minikube"
	I0923 10:36:19.192576 1588554 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:36:19.192689 1588554 start.go:125] createHost starting for "" (driver="none")
	I0923 10:36:19.194985 1588554 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0923 10:36:19.196198 1588554 exec_runner.go:51] Run: systemctl --version
	I0923 10:36:19.198807 1588554 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0923 10:36:19.198844 1588554 client.go:168] LocalClient.Create starting
	I0923 10:36:19.198929 1588554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca.pem
	I0923 10:36:19.198967 1588554 main.go:141] libmachine: Decoding PEM data...
	I0923 10:36:19.198986 1588554 main.go:141] libmachine: Parsing certificate...
	I0923 10:36:19.199033 1588554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/cert.pem
	I0923 10:36:19.199052 1588554 main.go:141] libmachine: Decoding PEM data...
	I0923 10:36:19.199065 1588554 main.go:141] libmachine: Parsing certificate...
	I0923 10:36:19.199430 1588554 client.go:171] duration metric: took 577.868µs to LocalClient.Create
	I0923 10:36:19.199455 1588554 start.go:167] duration metric: took 651.01µs to libmachine.API.Create "minikube"
	I0923 10:36:19.199461 1588554 start.go:293] postStartSetup for "minikube" (driver="none")
	I0923 10:36:19.199503 1588554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:36:19.199539 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:36:19.209126 1588554 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:36:19.209149 1588554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:36:19.209157 1588554 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:36:19.210966 1588554 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0923 10:36:19.212083 1588554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-1577701/.minikube/addons for local assets ...
	I0923 10:36:19.212135 1588554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-1577701/.minikube/files for local assets ...
	I0923 10:36:19.212155 1588554 start.go:296] duration metric: took 12.687054ms for postStartSetup
	I0923 10:36:19.212795 1588554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json ...
	I0923 10:36:19.212933 1588554 start.go:128] duration metric: took 20.232501ms to createHost
	I0923 10:36:19.212946 1588554 start.go:83] releasing machines lock for "minikube", held for 20.378727ms
	I0923 10:36:19.213290 1588554 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:36:19.213405 1588554 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0923 10:36:19.215275 1588554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:36:19.215410 1588554 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:36:19.225131 1588554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 10:36:19.225172 1588554 start.go:495] detecting cgroup driver to use...
	I0923 10:36:19.225207 1588554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:36:19.225324 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:36:19.246269 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:36:19.256037 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:36:19.265994 1588554 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:36:19.266081 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:36:19.276368 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:36:19.286490 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:36:19.297389 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:36:19.307066 1588554 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:36:19.316656 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:36:19.326288 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:36:19.336363 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 10:36:19.346290 1588554 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:36:19.355338 1588554 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:36:19.364071 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:19.577952 1588554 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0923 10:36:19.651036 1588554 start.go:495] detecting cgroup driver to use...
	I0923 10:36:19.651102 1588554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:36:19.651252 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:36:19.672247 1588554 exec_runner.go:51] Run: which cri-dockerd
	I0923 10:36:19.673216 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 10:36:19.681044 1588554 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0923 10:36:19.681067 1588554 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.681103 1588554 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.689425 1588554 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 10:36:19.689591 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4059772120 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.698668 1588554 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0923 10:36:19.932327 1588554 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0923 10:36:20.150083 1588554 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 10:36:20.150282 1588554 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0923 10:36:20.150300 1588554 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0923 10:36:20.150338 1588554 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0923 10:36:20.158569 1588554 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0923 10:36:20.158734 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2996454661 /etc/docker/daemon.json
	I0923 10:36:20.168354 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:20.379218 1588554 exec_runner.go:51] Run: sudo systemctl restart docker
	I0923 10:36:20.693080 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 10:36:20.705085 1588554 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0923 10:36:20.723552 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:36:20.735597 1588554 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0923 10:36:20.953725 1588554 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0923 10:36:21.177941 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:21.410173 1588554 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0923 10:36:21.423706 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:36:21.435794 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:21.688698 1588554 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0923 10:36:21.764452 1588554 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 10:36:21.764538 1588554 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0923 10:36:21.765977 1588554 start.go:563] Will wait 60s for crictl version
	I0923 10:36:21.766041 1588554 exec_runner.go:51] Run: which crictl
	I0923 10:36:21.767183 1588554 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0923 10:36:21.799990 1588554 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0923 10:36:21.800066 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:21.821449 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:21.845424 1588554 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0923 10:36:21.845506 1588554 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0923 10:36:21.848567 1588554 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0923 10:36:21.850015 1588554 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:36:21.850144 1588554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:36:21.850155 1588554 kubeadm.go:934] updating node { 10.128.15.239 8443 v1.31.1 docker true true} ...
	I0923 10:36:21.850253 1588554 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-12 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.239 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0923 10:36:21.850310 1588554 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0923 10:36:21.901691 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:21.901719 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:21.901730 1588554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:36:21.901755 1588554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.239 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-12 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:36:21.901910 1588554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.128.15.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-12"
	  kubeletExtraArgs:
	    node-ip: 10.128.15.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.128.15.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:36:21.901970 1588554 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:36:21.910706 1588554 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:36:21.910760 1588554 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:36:21.918867 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:36:21.918878 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 10:36:21.918874 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 10:36:21.918927 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:36:21.918927 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:36:21.919007 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:36:21.931740 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 10:36:21.973404 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2218285672 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:36:21.975632 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube621796612 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:36:22.005095 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3553074774 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:36:22.078082 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:36:22.087582 1588554 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0923 10:36:22.087606 1588554 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.087647 1588554 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.095444 1588554 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0923 10:36:22.095602 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4110124182 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.105645 1588554 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0923 10:36:22.105666 1588554 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0923 10:36:22.105700 1588554 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0923 10:36:22.113822 1588554 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:36:22.114022 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3324119727 /lib/systemd/system/kubelet.service
	I0923 10:36:22.123427 1588554 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0923 10:36:22.123598 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3318915681 /var/tmp/minikube/kubeadm.yaml.new
	I0923 10:36:22.131907 1588554 exec_runner.go:51] Run: grep 10.128.15.239	control-plane.minikube.internal$ /etc/hosts
	I0923 10:36:22.133649 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:22.363463 1588554 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:36:22.378439 1588554 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube for IP: 10.128.15.239
	I0923 10:36:22.378459 1588554 certs.go:194] generating shared ca certs ...
	I0923 10:36:22.378479 1588554 certs.go:226] acquiring lock for ca certs: {Name:mk757d3be8cf2fb32b8856d4b5e3173183901a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.378637 1588554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.key
	I0923 10:36:22.378678 1588554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.key
	I0923 10:36:22.378687 1588554 certs.go:256] generating profile certs ...
	I0923 10:36:22.378744 1588554 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key
	I0923 10:36:22.378763 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt with IP's: []
	I0923 10:36:22.592011 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt ...
	I0923 10:36:22.592085 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt: {Name:mk1bdb710d99b77b32099c81dc261479f881a61c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.592249 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key ...
	I0923 10:36:22.592262 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key: {Name:mk990e2a3a19cc03d4722edbfa635f5e467b2b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.592353 1588554 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83
	I0923 10:36:22.592371 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.239]
	I0923 10:36:22.826429 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 ...
	I0923 10:36:22.826468 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83: {Name:mkdaa76b99a75fc999a744f15c5aa0e73646ad27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.826632 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83 ...
	I0923 10:36:22.826650 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83: {Name:mk5c84f7ccec239df3b3f71560e288a437b89d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.826728 1588554 certs.go:381] copying /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 -> /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt
	I0923 10:36:22.826837 1588554 certs.go:385] copying /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83 -> /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key
	I0923 10:36:22.826896 1588554 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key
	I0923 10:36:22.826913 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0923 10:36:22.988376 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt ...
	I0923 10:36:22.988415 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt: {Name:mk1a79d5dbe06be337e3230425d1c5cb0b5c9c8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.988572 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key ...
	I0923 10:36:22.988587 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key: {Name:mk7f2be748011aa06064cd625f3afbd5fec49aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.988800 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:36:22.988842 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:36:22.988874 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:36:22.988896 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/key.pem (1675 bytes)
	I0923 10:36:22.989638 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:36:22.989763 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube32048499 /var/lib/minikube/certs/ca.crt
	I0923 10:36:22.999482 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 10:36:22.999627 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2462737595 /var/lib/minikube/certs/ca.key
	I0923 10:36:23.008271 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:36:23.008403 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2315409218 /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:36:23.016619 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:36:23.016796 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2778680620 /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:36:23.026283 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0923 10:36:23.026429 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2563673913 /var/lib/minikube/certs/apiserver.crt
	I0923 10:36:23.034367 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:36:23.034559 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1327376112 /var/lib/minikube/certs/apiserver.key
	I0923 10:36:23.043236 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:36:23.043385 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3861098534 /var/lib/minikube/certs/proxy-client.crt
	I0923 10:36:23.053261 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:36:23.053393 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1865989171 /var/lib/minikube/certs/proxy-client.key
	I0923 10:36:23.062749 1588554 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0923 10:36:23.062771 1588554 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.062810 1588554 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.070407 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:36:23.070572 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2921020744 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.078922 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:36:23.079082 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1931847277 /var/lib/minikube/kubeconfig
	I0923 10:36:23.087191 1588554 exec_runner.go:51] Run: openssl version
	I0923 10:36:23.090067 1588554 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:36:23.098811 1588554 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.100243 1588554 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 23 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.100280 1588554 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.103237 1588554 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:36:23.112696 1588554 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:36:23.113952 1588554 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:36:23.113993 1588554 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:36:23.114121 1588554 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 10:36:23.130863 1588554 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:36:23.141170 1588554 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:36:23.154896 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:23.177871 1588554 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:36:23.186183 1588554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:36:23.186207 1588554 kubeadm.go:157] found existing configuration files:
	
	I0923 10:36:23.186251 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:36:23.195211 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:36:23.195272 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:36:23.203608 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:36:23.212052 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:36:23.212118 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:36:23.220697 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:36:23.231762 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:36:23.231826 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:36:23.239886 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:36:23.250151 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:36:23.250215 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:36:23.257852 1588554 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:36:23.292982 1588554 kubeadm.go:310] W0923 10:36:23.292852 1589455 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:36:23.293485 1588554 kubeadm.go:310] W0923 10:36:23.293445 1589455 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:36:23.295381 1588554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:36:23.295429 1588554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:36:23.388509 1588554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:36:23.388613 1588554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:36:23.388622 1588554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:36:23.388626 1588554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:36:23.400110 1588554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:36:23.403660 1588554 out.go:235]   - Generating certificates and keys ...
	I0923 10:36:23.403706 1588554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:36:23.403719 1588554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:36:23.479635 1588554 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:36:23.612116 1588554 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:36:23.692069 1588554 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:36:23.926999 1588554 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:36:24.011480 1588554 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:36:24.011600 1588554 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 10:36:24.104614 1588554 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:36:24.104769 1588554 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 10:36:24.304540 1588554 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:36:24.538700 1588554 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:36:24.615897 1588554 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:36:24.616110 1588554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:36:24.791653 1588554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:36:24.910277 1588554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:36:25.215908 1588554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:36:25.289127 1588554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:36:25.490254 1588554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:36:25.490804 1588554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:36:25.493193 1588554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:36:25.495266 1588554 out.go:235]   - Booting up control plane ...
	I0923 10:36:25.495299 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:36:25.495318 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:36:25.495739 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:36:25.515279 1588554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:36:25.519949 1588554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:36:25.519979 1588554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:36:25.765044 1588554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:36:25.765080 1588554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:36:26.266756 1588554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.690653ms
	I0923 10:36:26.266797 1588554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:36:31.268595 1588554 kubeadm.go:310] [api-check] The API server is healthy after 5.001820679s
	I0923 10:36:31.279620 1588554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:36:31.290992 1588554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:36:31.308130 1588554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:36:31.308158 1588554 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-12 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:36:31.315634 1588554 kubeadm.go:310] [bootstrap-token] Using token: vj37sq.3v8d1kp1945z41wj
	I0923 10:36:31.316963 1588554 out.go:235]   - Configuring RBAC rules ...
	I0923 10:36:31.317008 1588554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:36:31.320391 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:36:31.328142 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:36:31.330741 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:36:31.333381 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:36:31.335890 1588554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:36:31.675856 1588554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:36:32.106847 1588554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:36:32.674219 1588554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:36:32.675126 1588554 kubeadm.go:310] 
	I0923 10:36:32.675137 1588554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:36:32.675141 1588554 kubeadm.go:310] 
	I0923 10:36:32.675148 1588554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:36:32.675152 1588554 kubeadm.go:310] 
	I0923 10:36:32.675156 1588554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:36:32.675160 1588554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:36:32.675164 1588554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:36:32.675171 1588554 kubeadm.go:310] 
	I0923 10:36:32.675175 1588554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:36:32.675179 1588554 kubeadm.go:310] 
	I0923 10:36:32.675184 1588554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:36:32.675188 1588554 kubeadm.go:310] 
	I0923 10:36:32.675192 1588554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:36:32.675196 1588554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:36:32.675207 1588554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:36:32.675211 1588554 kubeadm.go:310] 
	I0923 10:36:32.675217 1588554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:36:32.675221 1588554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:36:32.675225 1588554 kubeadm.go:310] 
	I0923 10:36:32.675228 1588554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vj37sq.3v8d1kp1945z41wj \
	I0923 10:36:32.675233 1588554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91a09f8ec29205faf582a48ccf10beda52dc431d394b0dc26a537d8edbd2b49c \
	I0923 10:36:32.675237 1588554 kubeadm.go:310] 	--control-plane 
	I0923 10:36:32.675242 1588554 kubeadm.go:310] 
	I0923 10:36:32.675246 1588554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:36:32.675252 1588554 kubeadm.go:310] 
	I0923 10:36:32.675255 1588554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vj37sq.3v8d1kp1945z41wj \
	I0923 10:36:32.675258 1588554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91a09f8ec29205faf582a48ccf10beda52dc431d394b0dc26a537d8edbd2b49c 
	I0923 10:36:32.679087 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:32.679120 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:32.680982 1588554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:36:32.682253 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:36:32.692879 1588554 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 10:36:32.693059 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3098274276 /etc/cni/net.d/1-k8s.conflist
	I0923 10:36:32.704393 1588554 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:36:32.704473 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:32.704510 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-12 minikube.k8s.io/updated_at=2024_09_23T10_36_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0923 10:36:32.713564 1588554 ops.go:34] apiserver oom_adj: -16
	I0923 10:36:32.777699 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:33.277929 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:33.778034 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:34.278552 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:34.777937 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:35.278677 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:35.777756 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:36.278547 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:36.343720 1588554 kubeadm.go:1113] duration metric: took 3.63930993s to wait for elevateKubeSystemPrivileges
	I0923 10:36:36.343761 1588554 kubeadm.go:394] duration metric: took 13.229771538s to StartCluster
	I0923 10:36:36.343783 1588554 settings.go:142] acquiring lock: {Name:mkf413d2c932a8f45f91708eee4886fc43a35e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:36.343846 1588554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:36:36.344451 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/kubeconfig: {Name:mk42cd91ee317759dd4ab26721004c644d4d46c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:36.344664 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:36:36.344755 1588554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:36:36.344891 1588554 addons.go:69] Setting yakd=true in profile "minikube"
	I0923 10:36:36.344910 1588554 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0923 10:36:36.344913 1588554 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0923 10:36:36.344939 1588554 addons.go:69] Setting registry=true in profile "minikube"
	I0923 10:36:36.344931 1588554 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0923 10:36:36.344946 1588554 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0923 10:36:36.344964 1588554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0923 10:36:36.344976 1588554 addons.go:234] Setting addon registry=true in "minikube"
	I0923 10:36:36.344980 1588554 mustload.go:65] Loading cluster: minikube
	I0923 10:36:36.344979 1588554 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0923 10:36:36.344992 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.344990 1588554 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:36:36.345000 1588554 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0923 10:36:36.345005 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345031 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345045 1588554 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0923 10:36:36.345072 1588554 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0923 10:36:36.345087 1588554 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0923 10:36:36.345088 1588554 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0923 10:36:36.345104 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345114 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345179 1588554 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:36:36.345317 1588554 addons.go:69] Setting volcano=true in profile "minikube"
	I0923 10:36:36.345335 1588554 addons.go:234] Setting addon volcano=true in "minikube"
	I0923 10:36:36.345361 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345658 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345675 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345680 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345690 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345717 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345758 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345762 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345775 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345780 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345807 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345824 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345827 1588554 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0923 10:36:36.345827 1588554 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0923 10:36:36.344919 1588554 addons.go:234] Setting addon yakd=true in "minikube"
	I0923 10:36:36.345839 1588554 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0923 10:36:36.345843 1588554 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0923 10:36:36.344930 1588554 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0923 10:36:36.345858 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345860 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345861 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345874 1588554 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0923 10:36:36.345918 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345811 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346177 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346191 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346221 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346328 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346342 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346371 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346524 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346536 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346550 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345861 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.346579 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346655 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346673 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346705 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345810 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345717 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346539 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.347192 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.347221 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.347233 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.347253 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.347284 1588554 out.go:177] * Configuring local host environment ...
	I0923 10:36:36.345829 1588554 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0923 10:36:36.347650 1588554 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0923 10:36:36.348407 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.348430 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.348463 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0923 10:36:36.348690 1588554 out.go:270] * 
	W0923 10:36:36.348780 1588554 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0923 10:36:36.348809 1588554 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0923 10:36:36.348865 1588554 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0923 10:36:36.348897 1588554 out.go:270] * 
	W0923 10:36:36.348999 1588554 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0923 10:36:36.349040 1588554 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0923 10:36:36.349080 1588554 out.go:270] * 
	W0923 10:36:36.349130 1588554 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0923 10:36:36.349173 1588554 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0923 10:36:36.349199 1588554 out.go:270] * 
	W0923 10:36:36.349236 1588554 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0923 10:36:36.349282 1588554 start.go:235] Will wait 6m0s for node &{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:36:36.345810 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.350050 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.350088 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.350710 1588554 out.go:177] * Verifying Kubernetes components...
	I0923 10:36:36.352239 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:36.369581 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.369720 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.370463 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.371382 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.373298 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.379392 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.383028 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.385097 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.385628 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.385693 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.389742 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.389782 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.389793 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402210 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402285 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402285 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402325 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402356 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402407 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402488 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402530 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402557 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402328 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.406952 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.406987 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.407339 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.407394 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.414599 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.414632 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.415393 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.415455 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.415667 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.415722 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.417736 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.417799 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.420551 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.420602 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.421969 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.421994 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.422984 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.423319 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.423344 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.424659 1588554 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:36:36.424874 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.424899 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.428268 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.428559 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.430071 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:36:36.430076 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:36:36.430207 1588554 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:36:36.431382 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:36:36.431427 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:36:36.431585 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube840197264 /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:36:36.431790 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:36:36.431815 1588554 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:36:36.431987 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1728725482 /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:36:36.433518 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:36:36.434702 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:36:36.435367 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.435397 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.436902 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:36:36.438150 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:36:36.439277 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.439337 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.440540 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.440996 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:36:36.442010 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:36:36.442071 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.442098 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.442561 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.442772 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.443079 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.443136 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.443350 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.443375 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.443492 1588554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443518 1588554 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0923 10:36:36.443525 1588554 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443566 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443844 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:36:36.443879 1588554 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:36:36.444008 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4228746672 /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:36:36.444580 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:36:36.446035 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:36:36.446930 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.446950 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.447416 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.448168 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.448190 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.448643 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.448661 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.449758 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0923 10:36:36.449765 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:36:36.449802 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:36:36.449942 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4141716628 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:36:36.452784 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.452686 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0923 10:36:36.454911 1588554 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:36:36.454973 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.455634 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.456554 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.457231 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:36:36.457268 1588554 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:36:36.457428 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1881343942 /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:36:36.458064 1588554 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.458100 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:36:36.458238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2629288326 /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.458427 1588554 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:36:36.458490 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0923 10:36:36.458554 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:36:36.458748 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.459583 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.459224 1588554 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0923 10:36:36.459875 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.459904 1588554 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.459934 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:36:36.460073 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1172599530 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.460516 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:36:36.460548 1588554 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:36:36.460695 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1059056177 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:36:36.462006 1588554 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.462043 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
	I0923 10:36:36.462614 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3721652212 /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.464913 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.464936 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.464972 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.467000 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.472480 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:36:36.473238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube726889991 /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.480760 1588554 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0923 10:36:36.480939 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.485106 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.485141 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.485190 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.487844 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:36:36.487878 1588554 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:36:36.488012 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3601575597 /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:36:36.489111 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.491189 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:36:36.491220 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:36:36.491369 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3194307137 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:36:36.492639 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.492667 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.494194 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.494218 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.494867 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.498982 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.499389 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.500765 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.500800 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:36:36.500956 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1985750997 /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.501929 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.503522 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.507731 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:36:36.507981 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:36:36.508221 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2102644874 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:36:36.508499 1588554 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:36:36.508667 1588554 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:36:36.509791 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:36:36.509885 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:36:36.510186 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:36:36.510211 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:36:36.510259 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:36:36.510276 1588554 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:36:36.510535 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2223790766 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:36:36.510687 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2284125594 /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:36:36.511165 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1172030255 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:36:36.518843 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.518932 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.519210 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:36:36.519243 1588554 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:36:36.519417 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2081246003 /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:36:36.527052 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.530307 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.531182 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.531199 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:36:36.531224 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:36:36.531366 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2359416048 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:36:36.534852 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.534897 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.534862 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:36:36.534931 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:36:36.534930 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:36:36.534953 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:36:36.535115 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube169766603 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:36:36.535148 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube873661914 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:36:36.540683 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.547811 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:36:36.548029 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:36:36.548063 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:36:36.548238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube411864712 /etc/kubernetes/addons/ig-role.yaml
	I0923 10:36:36.553057 1588554 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:36:36.555188 1588554 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:36:36.555273 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:36:36.555312 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:36:36.555486 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4206261347 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:36:36.562063 1588554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.562124 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:36:36.562318 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918834683 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.563155 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:36:36.563195 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:36:36.563361 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2570607285 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:36:36.568213 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:36:36.568257 1588554 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:36:36.568398 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube393911802 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:36:36.571999 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.572033 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:36:36.572185 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2353575520 /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.577466 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.577543 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.587661 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.598560 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.598607 1588554 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:36:36.598954 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2771751730 /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.603217 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:36:36.603313 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:36:36.603600 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4069496750 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:36:36.604133 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:36:36.604165 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:36:36.604308 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1964334193 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:36:36.604545 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:36:36.604574 1588554 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:36:36.604700 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2583663156 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:36:36.610522 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.610602 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.615633 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.616448 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.616504 1588554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.616528 1588554 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0923 10:36:36.616540 1588554 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.616587 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.633448 1588554 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.633487 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:36:36.633636 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3570026092 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.637790 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:36:36.637820 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:36:36.637954 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2992782773 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:36:36.646982 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.677372 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.679555 1588554 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:36:36.679857 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4202431507 /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.688839 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:36:36.688874 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:36:36.689001 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube389006966 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:36:36.693416 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:36:36.693456 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:36:36.693585 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2951849839 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:36:36.738946 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.774333 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:36:36.774371 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:36:36.774529 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1226040952 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:36:36.785891 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:36:36.785936 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:36:36.786131 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1330733841 /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:36:36.796363 1588554 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:36:36.807897 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.807939 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:36:36.808082 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube111334727 /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.814837 1588554 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 10:36:36.818242 1588554 node_ready.go:49] node "ubuntu-20-agent-12" has status "Ready":"True"
	I0923 10:36:36.818281 1588554 node_ready.go:38] duration metric: took 3.403871ms for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 10:36:36.818293 1588554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:36:36.823705 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.828322 1588554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:36.832595 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:36:36.832627 1588554 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:36:36.832974 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1712125769 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:36:36.870153 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:36:36.870197 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:36:36.870386 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2973576979 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:36:36.926104 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:36:36.926143 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:36:36.926289 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2280122930 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:36:36.938896 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:36.938934 1588554 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:36:36.939070 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1690561903 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:36.950670 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:37.100928 1588554 addons.go:475] Verifying addon registry=true in "minikube"
	I0923 10:36:37.102814 1588554 out.go:177] * Verifying registry addon...
	I0923 10:36:37.112453 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:36:37.120259 1588554 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:36:37.187559 1588554 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0923 10:36:37.634285 1588554 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:36:37.634317 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:37.695664 1588554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0923 10:36:37.724258 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.07719175s)
	I0923 10:36:37.724301 1588554 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0923 10:36:37.739850 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.124159231s)
	I0923 10:36:37.742561 1588554 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0923 10:36:37.849519 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.025767323s)
	I0923 10:36:38.120128 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:38.376349 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.69890606s)
	W0923 10:36:38.376406 1588554 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:36:38.376435 1588554 retry.go:31] will retry after 154.227647ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
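	The failed apply and retry above show a common CRD ordering race: the `VolumeSnapshotClass` object is submitted in the same `kubectl apply` as the CRDs that define it, and the API server has not yet established the new CRDs when the custom resource arrives ("no matches for kind ... ensure CRDs are installed first"). minikube handles this by retrying, next with `--force`. When applying such manifests by hand, one way to avoid the race is to apply the CRDs first and wait for them to become established — a sketch, reusing the file paths from the log:

	```shell
	# Apply the snapshot CRDs on their own first.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

	# Block until the API server has established the CRDs, i.e. until
	# "no matches for kind" can no longer occur for these kinds.
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io

	# Only then create the custom resources that depend on them.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	```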
	I0923 10:36:38.532717 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:38.617615 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:38.835917 1588554 pod_ready.go:103] pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:39.116010 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:39.531492 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.580742626s)
	I0923 10:36:39.531534 1588554 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0923 10:36:39.537060 1588554 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:36:39.539558 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:36:39.547478 1588554 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:36:39.547508 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:39.616393 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:39.677521 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.146291745s)
	I0923 10:36:40.048496 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:40.116802 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:40.545476 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:40.617107 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:40.834321 1588554 pod_ready.go:93] pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:40.834347 1588554 pod_ready.go:82] duration metric: took 4.005994703s for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:40.834359 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:41.044378 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:41.144560 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:41.351204 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.818400841s)
	I0923 10:36:41.545380 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:41.616429 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.044309 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:42.116963 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.545513 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:42.616637 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.841366 1588554 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:43.045300 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:43.116762 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:43.431875 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:36:43.432127 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3269004284 /var/lib/minikube/google_application_credentials.json
	I0923 10:36:43.445163 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:36:43.445319 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3403460145 /var/lib/minikube/google_cloud_project
	I0923 10:36:43.457431 1588554 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0923 10:36:43.457499 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:43.458127 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:43.458149 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:43.458181 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:43.479053 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:43.491340 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:43.491424 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:43.503388 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:43.503426 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:43.508517 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
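	The probe sequence above shows how minikube's `none` driver verifies the apiserver before enabling gcp-auth: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then hit the `/healthz` endpoint. Done by hand it is roughly the following — a sketch assuming the cgroup v1 layout and server address seen in the log:

	```shell
	# Find the newest kube-apiserver process for this cluster
	# (-x exact match against the -f full command line, -n newest).
	pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')

	# Read its freezer cgroup path and confirm the container is not frozen.
	cg=$(sudo grep -E '^[0-9]+:freezer:' "/proc/${pid}/cgroup" | cut -d: -f3)
	sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"   # log shows: THAWED

	# Finally, probe the apiserver's health endpoint directly.
	curl -sk https://10.128.15.239:8443/healthz            # log shows: ok
	```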
	I0923 10:36:43.508577 1588554 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:36:43.511610 1588554 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:36:43.513346 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:36:43.514725 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:36:43.514758 1588554 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:36:43.514881 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube616037526 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:36:43.525139 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:36:43.525184 1588554 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:36:43.525334 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3406397122 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:36:43.536623 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.536656 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:36:43.536845 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3654027324 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.544627 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:43.548001 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.616664 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:44.106662 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:44.245172 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:44.462186 1588554 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0923 10:36:44.463828 1588554 out.go:177] * Verifying gcp-auth addon...
	I0923 10:36:44.466561 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:36:44.469735 1588554 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:36:44.571760 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:44.616121 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:45.045508 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:45.116582 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:45.342074 1588554 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:45.544902 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:45.617645 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.044759 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:46.117793 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.546485 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:46.616891 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.840864 1588554 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.840888 1588554 pod_ready.go:82] duration metric: took 6.006520139s for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.840899 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.846458 1588554 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.846487 1588554 pod_ready.go:82] duration metric: took 5.579842ms for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.846499 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.850991 1588554 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.851013 1588554 pod_ready.go:82] duration metric: took 4.506621ms for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.851020 1588554 pod_ready.go:39] duration metric: took 10.032714922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:36:46.851040 1588554 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:36:46.851099 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:46.875129 1588554 api_server.go:72] duration metric: took 10.525769516s to wait for apiserver process to appear ...
	I0923 10:36:46.875164 1588554 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:36:46.875191 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:46.879815 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:46.880904 1588554 api_server.go:141] control plane version: v1.31.1
	I0923 10:36:46.880933 1588554 api_server.go:131] duration metric: took 5.761723ms to wait for apiserver health ...
	I0923 10:36:46.880944 1588554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:36:46.889660 1588554 system_pods.go:59] 16 kube-system pods found
	I0923 10:36:46.889699 1588554 system_pods.go:61] "coredns-7c65d6cfc9-p5xcl" [f5f9a7c8-fde0-47d4-ad0d-64ad04053a9c] Running
	I0923 10:36:46.889712 1588554 system_pods.go:61] "csi-hostpath-attacher-0" [3359d397-e4ff-42f7-a50a-d3f528d35993] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:36:46.889722 1588554 system_pods.go:61] "csi-hostpath-resizer-0" [9c4d8c86-795e-4ef6-a3ee-092372993d50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:36:46.889739 1588554 system_pods.go:61] "csi-hostpathplugin-2flxk" [1fd9aa09-39b0-440c-a97d-578bbad40f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:36:46.889746 1588554 system_pods.go:61] "etcd-ubuntu-20-agent-12" [a5459b2e-0d67-4c43-9e0d-f680efb64d3f] Running
	I0923 10:36:46.889752 1588554 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-12" [1a730626-aab7-4d08-b75b-523608e16b08] Running
	I0923 10:36:46.889759 1588554 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-12" [e67abe58-a228-4b5d-a487-1afe60ef2341] Running
	I0923 10:36:46.889765 1588554 system_pods.go:61] "kube-proxy-275md" [5201ac4e-6f2a-4040-ba5b-de3260351ceb] Running
	I0923 10:36:46.889770 1588554 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-12" [a148d437-fa1a-470b-a96d-ac0bd83228cd] Running
	I0923 10:36:46.889777 1588554 system_pods.go:61] "metrics-server-84c5f94fbc-l8xpt" [be83f637-49a0-4d61-b588-544359407926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:36:46.889783 1588554 system_pods.go:61] "nvidia-device-plugin-daemonset-rmgc2" [7b196bf3-bd4c-4575-9cd3-d1c7adf5e6be] Running
	I0923 10:36:46.889793 1588554 system_pods.go:61] "registry-66c9cd494c-xghlh" [3805a0ce-c102-4a58-92fb-1845d803f30a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:36:46.889800 1588554 system_pods.go:61] "registry-proxy-j2dg7" [04db77a5-6d0f-40b1-b220-f94a39762520] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:36:46.889810 1588554 system_pods.go:61] "snapshot-controller-56fcc65765-ncqwr" [9e2acf06-ed7b-441d-95cd-2bf1bcde1ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.889821 1588554 system_pods.go:61] "snapshot-controller-56fcc65765-xp8jb" [420b2463-f719-45de-a16b-01add2f57250] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.889826 1588554 system_pods.go:61] "storage-provisioner" [609264e3-b351-446c-bb44-88cf8a4fbfca] Running
	I0923 10:36:46.889835 1588554 system_pods.go:74] duration metric: took 8.88361ms to wait for pod list to return data ...
	I0923 10:36:46.889844 1588554 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:36:46.892857 1588554 default_sa.go:45] found service account: "default"
	I0923 10:36:46.892882 1588554 default_sa.go:55] duration metric: took 3.031168ms for default service account to be created ...
	I0923 10:36:46.892893 1588554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:36:46.901634 1588554 system_pods.go:86] 16 kube-system pods found
	I0923 10:36:46.901674 1588554 system_pods.go:89] "coredns-7c65d6cfc9-p5xcl" [f5f9a7c8-fde0-47d4-ad0d-64ad04053a9c] Running
	I0923 10:36:46.901688 1588554 system_pods.go:89] "csi-hostpath-attacher-0" [3359d397-e4ff-42f7-a50a-d3f528d35993] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:36:46.901699 1588554 system_pods.go:89] "csi-hostpath-resizer-0" [9c4d8c86-795e-4ef6-a3ee-092372993d50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:36:46.901714 1588554 system_pods.go:89] "csi-hostpathplugin-2flxk" [1fd9aa09-39b0-440c-a97d-578bbad40f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:36:46.901725 1588554 system_pods.go:89] "etcd-ubuntu-20-agent-12" [a5459b2e-0d67-4c43-9e0d-f680efb64d3f] Running
	I0923 10:36:46.901732 1588554 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-12" [1a730626-aab7-4d08-b75b-523608e16b08] Running
	I0923 10:36:46.901741 1588554 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-12" [e67abe58-a228-4b5d-a487-1afe60ef2341] Running
	I0923 10:36:46.901747 1588554 system_pods.go:89] "kube-proxy-275md" [5201ac4e-6f2a-4040-ba5b-de3260351ceb] Running
	I0923 10:36:46.901753 1588554 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-12" [a148d437-fa1a-470b-a96d-ac0bd83228cd] Running
	I0923 10:36:46.901767 1588554 system_pods.go:89] "metrics-server-84c5f94fbc-l8xpt" [be83f637-49a0-4d61-b588-544359407926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:36:46.901776 1588554 system_pods.go:89] "nvidia-device-plugin-daemonset-rmgc2" [7b196bf3-bd4c-4575-9cd3-d1c7adf5e6be] Running
	I0923 10:36:46.901784 1588554 system_pods.go:89] "registry-66c9cd494c-xghlh" [3805a0ce-c102-4a58-92fb-1845d803f30a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:36:46.901790 1588554 system_pods.go:89] "registry-proxy-j2dg7" [04db77a5-6d0f-40b1-b220-f94a39762520] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:36:46.901801 1588554 system_pods.go:89] "snapshot-controller-56fcc65765-ncqwr" [9e2acf06-ed7b-441d-95cd-2bf1bcde1ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.901810 1588554 system_pods.go:89] "snapshot-controller-56fcc65765-xp8jb" [420b2463-f719-45de-a16b-01add2f57250] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.901814 1588554 system_pods.go:89] "storage-provisioner" [609264e3-b351-446c-bb44-88cf8a4fbfca] Running
	I0923 10:36:46.901824 1588554 system_pods.go:126] duration metric: took 8.925234ms to wait for k8s-apps to be running ...
	I0923 10:36:46.901834 1588554 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:36:46.901887 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:36:46.916755 1588554 system_svc.go:56] duration metric: took 14.881074ms WaitForService to wait for kubelet
	I0923 10:36:46.916789 1588554 kubeadm.go:582] duration metric: took 10.567438885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:36:46.916809 1588554 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:36:46.920579 1588554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 10:36:46.920616 1588554 node_conditions.go:123] node cpu capacity is 8
	I0923 10:36:46.920632 1588554 node_conditions.go:105] duration metric: took 3.817539ms to run NodePressure ...
	I0923 10:36:46.920648 1588554 start.go:241] waiting for startup goroutines ...
	I0923 10:36:47.045158 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:47.117155 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:47.572416 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:47.616622 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:48.045426 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:48.116767 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:48.573214 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:48.616845 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:49.044221 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:49.117209 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:49.543831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:49.615831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:50.044752 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:50.117047 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:50.572160 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:50.617157 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:51.045029 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:51.116892 1588554 kapi.go:107] duration metric: took 14.004458573s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:36:51.571831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:52.044681 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:52.544488 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:53.071964 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:53.544286 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:54.044362 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:54.572181 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:55.073837 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:55.544285 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:56.044544 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:56.545079 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:57.044265 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:57.544710 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:58.074493 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:58.544754 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:59.044416 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:59.545731 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:00.044364 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:00.545006 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:01.043696 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:01.544143 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:02.044850 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:02.544007 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:03.073713 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:03.544432 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:04.044116 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:04.544249 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:05.084663 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:05.545630 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:06.073711 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:06.545674 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:07.074336 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:07.573379 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:08.072260 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:08.573326 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:09.046665 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:09.572302 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:10.044323 1588554 kapi.go:107] duration metric: took 30.504755495s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:42:44.467839 1588554 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0923 10:42:44.467877 1588554 kapi.go:107] duration metric: took 6m0.001323817s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0923 10:42:44.467989 1588554 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0923 10:42:44.469896 1588554 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver
	I0923 10:42:44.471562 1588554 addons.go:510] duration metric: took 6m8.126806783s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver]
	I0923 10:42:44.471618 1588554 start.go:246] waiting for cluster config update ...
	I0923 10:42:44.471643 1588554 start.go:255] writing updated cluster config ...
	I0923 10:42:44.471977 1588554 exec_runner.go:51] Run: rm -f paused
	I0923 10:42:44.523125 1588554 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:42:44.524945 1588554 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 10:48:45 UTC. --
	Sep 23 10:38:34 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:38:34Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:38:38 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:38:38Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:38:42 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:38:42Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:38:43 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:38:43.578834183Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:38:43 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:38:43.578838348Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:38:43 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:38:43.580779747Z" level=error msg="Error running exec 8838e2670a88a9bf36c5939c4d717e9cf4ecb3a5e2ba01162dc7e81ca0b809a3 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=c00a6f27e28707c6 traceID=910e559f8e6555c896c8cf8584eb4b08
	Sep 23 10:38:43 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:38:43.785563896Z" level=info msg="ignoring event" container=8e764833448cda7cbb8e58d0d13c9d15d232a35640c17dbb5b5801b6f530938a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:56 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:39:56Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 10:39:59 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:39:59Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:40:02 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:40:02Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:40:14 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:40:14Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:40:15 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:40:15.705531535Z" level=info msg="ignoring event" container=d89ac4009f96a5930175fc54a170f24a7d2ebb3f21412ffe06746a8a75281462 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:42:45 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:45Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:42:46 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:46Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:42:50 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:50Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 10:42:59 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:59Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:43:00 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:43:00.881341653Z" level=info msg="ignoring event" container=479fe5cc32913c30ee1f61f86ce466c10554b176126704459014bdbdced160af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:47:49 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:47:49Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:47:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:47:52Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 10:47:53 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:47:53Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:48:04 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:48:04Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.540680915Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.540684219Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.542670843Z" level=error msg="Error running exec 5fd2d79e980950ca565c3a912c8440ea08719c5a16c1780c5869c00f977ccd0f in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=5608c228de976ea9 traceID=04969482329070952bf3db909444f8ca
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.744401240Z" level=info msg="ignoring event" container=3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	3827f0f3d5112       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            41 seconds ago      Exited              gadget                                   7                   f44622d46ba2f       gadget-cc7cr
	1c0aec03476e1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	f22e4f1571647       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	b43acbe9c46ae       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	80af8a926afc3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	6f57e7ad00a9e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	369c356333963       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   83f21cc9148ed       csi-hostpath-resizer-0
	764a5f36015a2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	5e03ecec68932       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   04bee9af65b88       csi-hostpath-attacher-0
	2a9c9054db024       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   954881763f4d2       snapshot-controller-56fcc65765-xp8jb
	5189bf51dfe60       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   3a5a27bdb1e27       snapshot-controller-56fcc65765-ncqwr
	100fd02a1faf5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        11 minutes ago      Running             yakd                                     0                   aad214bb107e1       yakd-dashboard-67d98fc6b-j4j2x
	7df30468750a3       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   26d7d65f4a110       metrics-server-84c5f94fbc-l8xpt
	e6929e7afa035       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   45d7b20be1819       cloud-spanner-emulator-5b584cc74-97lv7
	88b34955ceb18       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   34f59459d9996       local-path-provisioner-86d989889c-r6cj8
	cc089ff435908       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Running             registry                                 0                   b877c8259724a       registry-66c9cd494c-xghlh
	9740e1ab45dff       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Running             registry-proxy                           0                   d6ea241113e50       registry-proxy-j2dg7
	71c8aef5c5c24       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     12 minutes ago      Running             nvidia-device-plugin-ctr                 0                   2b86e9d29eb33       nvidia-device-plugin-daemonset-rmgc2
	c98c33bab4e43       c69fa2e9cbf5f                                                                                                                                12 minutes ago      Running             coredns                                  0                   f681430aabf24       coredns-7c65d6cfc9-p5xcl
	045fad5ce6ab4       60c005f310ff3                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   6e8a6bce97790       kube-proxy-275md
	a88800a1ce5b9       6e38f40d628db                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   e04842fad72fa       storage-provisioner
	e008cb9d44fcb       175ffd71cce3d                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   2f63f87bd15d1       kube-controller-manager-ubuntu-20-agent-12
	cefe11af8e634       9aa1fad941575                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   3f8185d06efd3       kube-scheduler-ubuntu-20-agent-12
	98649c04ed191       6bab7719df100                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   60b7c561b6237       kube-apiserver-ubuntu-20-agent-12
	891452784bf9b       2e96e5913fc06                                                                                                                                12 minutes ago      Running             etcd                                     0                   087dc8c7c97f8       etcd-ubuntu-20-agent-12
	
	
	==> coredns [c98c33bab4e4] <==
	[INFO] 10.244.0.5:39130 - 49408 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011371s
	[INFO] 10.244.0.5:36683 - 40984 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092092s
	[INFO] 10.244.0.5:36683 - 54814 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000177141s
	[INFO] 10.244.0.5:48486 - 28442 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000086929s
	[INFO] 10.244.0.5:48486 - 5406 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000127637s
	[INFO] 10.244.0.5:59402 - 60382 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000079785s
	[INFO] 10.244.0.5:59402 - 6106 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000100251s
	[INFO] 10.244.0.5:56367 - 45414 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00007586s
	[INFO] 10.244.0.5:56367 - 44632 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000107663s
	[INFO] 10.244.0.5:56779 - 21145 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071153s
	[INFO] 10.244.0.5:56779 - 17307 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139638s
	[INFO] 10.244.0.5:50701 - 22008 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00010586s
	[INFO] 10.244.0.5:50701 - 60925 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136235s
	[INFO] 10.244.0.5:34160 - 49361 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079304s
	[INFO] 10.244.0.5:34160 - 47831 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000185735s
	[INFO] 10.244.0.5:46275 - 16771 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008177s
	[INFO] 10.244.0.5:46275 - 49536 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108335s
	[INFO] 10.244.0.5:47968 - 20526 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.00008698s
	[INFO] 10.244.0.5:47968 - 10797 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000120657s
	[INFO] 10.244.0.5:37248 - 56533 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000080178s
	[INFO] 10.244.0.5:37248 - 45520 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000103163s
	[INFO] 10.244.0.5:39385 - 32664 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000082135s
	[INFO] 10.244.0.5:39385 - 56732 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000177006s
	[INFO] 10.244.0.5:37963 - 19331 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068935s
	[INFO] 10.244.0.5:37963 - 62598 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104055s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-12
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-12
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_36_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-12
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-12"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:36:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-12
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:48:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:47:46 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:47:46 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:47:46 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:47:46 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.128.15.239
	  Hostname:    ubuntu-20-agent-12
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                26e2d22b-def2-c216-b2a9-007020fa8ce7
	  Boot ID:                    83656df0-482a-417d-b7fc-90bc5fb37652
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-97lv7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-cc7cr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-p5xcl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-2flxk                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ubuntu-20-agent-12                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-ubuntu-20-agent-12             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-12    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-275md                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ubuntu-20-agent-12             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-l8xpt               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-rmgc2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-66c9cd494c-xghlh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-j2dg7                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-ncqwr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-xp8jb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-r6cj8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-admission-7f54bd7598-rfghv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-admission-init-gh7z4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-controllers-5ff7c5d4db-529t5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-scheduler-79dc4b78bb-zdd4g            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-j4j2x                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node ubuntu-20-agent-12 event: Registered Node ubuntu-20-agent-12 in Controller
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 28 f8 d2 0a cd 08 06
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7e b8 fc 4c f3 9c 08 06
	[Sep23 10:36] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 6e 58 88 a9 4c 08 06
	[ +10.128758] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 aa 9b fb 38 08 06
	[  +0.000410] IPv4: martian source 10.244.0.5 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 6e 58 88 a9 4c 08 06
	[  +2.001125] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 02 27 ad 4b 0d 08 06
	[  +0.032532] IPv4: martian source 10.244.0.5 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e ed 25 59 75 f3 08 06
	[  +3.912883] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 62 ba d6 13 c3 e3 08 06
	[  +2.709643] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 66 31 90 37 c7 08 06
	[  +0.019221] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 1d 22 9e 8e 47 08 06
	[  +9.151781] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 ca ad 28 d8 56 08 06
	[  +0.348439] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 59 84 5e b0 7b 08 06
	[  +0.569834] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e c1 ff 28 29 42 08 06
	
	
	==> etcd [891452784bf9] <==
	{"level":"info","ts":"2024-09-23T10:36:28.599143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:28.599153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac received MsgVoteResp from dd041fa4dc6d4aac at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:28.599207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac became leader at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:28.599225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dd041fa4dc6d4aac elected leader dd041fa4dc6d4aac at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:28.600162Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.600816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:36:28.600810Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dd041fa4dc6d4aac","local-member-attributes":"{Name:ubuntu-20-agent-12 ClientURLs:[https://10.128.15.239:2379]}","request-path":"/0/members/dd041fa4dc6d4aac/attributes","cluster-id":"c05a044d5786a1e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:36:28.600843Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:36:28.600903Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c05a044d5786a1e7","local-member-id":"dd041fa4dc6d4aac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.600975Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.601004Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.601085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:36:28.601103Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:36:28.601891Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:36:28.602013Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:36:28.602702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.239:2379"}
	{"level":"info","ts":"2024-09-23T10:36:28.603219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:36:44.242056Z","caller":"traceutil/trace.go:171","msg":"trace[1467056625] linearizableReadLoop","detail":"{readStateIndex:849; appliedIndex:845; }","duration":"128.026224ms","start":"2024-09-23T10:36:44.114013Z","end":"2024-09-23T10:36:44.242039Z","steps":["trace[1467056625] 'read index received'  (duration: 46.430648ms)","trace[1467056625] 'applied index is now lower than readState.Index'  (duration: 81.594963ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:36:44.242093Z","caller":"traceutil/trace.go:171","msg":"trace[2126161537] transaction","detail":"{read_only:false; response_revision:831; number_of_response:1; }","duration":"134.824059ms","start":"2024-09-23T10:36:44.107242Z","end":"2024-09-23T10:36:44.242066Z","steps":["trace[2126161537] 'process raft request'  (duration: 123.210784ms)","trace[2126161537] 'compare'  (duration: 11.439426ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:36:44.242290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.188403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:36:44.242444Z","caller":"traceutil/trace.go:171","msg":"trace[1472265816] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:832; }","duration":"128.418389ms","start":"2024-09-23T10:36:44.114009Z","end":"2024-09-23T10:36:44.242428Z","steps":["trace[1472265816] 'agreement among raft nodes before linearized reading'  (duration: 128.138624ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:36:44.242340Z","caller":"traceutil/trace.go:171","msg":"trace[1535126050] transaction","detail":"{read_only:false; response_revision:832; number_of_response:1; }","duration":"133.407624ms","start":"2024-09-23T10:36:44.108904Z","end":"2024-09-23T10:36:44.242312Z","steps":["trace[1535126050] 'process raft request'  (duration: 133.085569ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:46:28.621172Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1493}
	{"level":"info","ts":"2024-09-23T10:46:28.644160Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1493,"took":"22.540162ms","hash":974073395,"current-db-size-bytes":7499776,"current-db-size":"7.5 MB","current-db-size-in-use-bytes":3624960,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T10:46:28.644213Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":974073395,"revision":1493,"compact-revision":-1}
	
	
	==> kernel <==
	 10:48:45 up 1 day, 16:31,  0 users,  load average: 0.05, 0.25, 0.78
	Linux ubuntu-20-agent-12 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [98649c04ed19] <==
	E0923 10:44:47.558874       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	E0923 10:44:47.558845       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:44:47.560414       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:44:47.560423       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:45:47.569479       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:45:47.569530       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:45:47.569479       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:45:47.569561       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:45:47.571928       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:45:47.571932       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:46:47.580308       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:46:47.580357       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:46:47.580308       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:46:47.580420       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:46:47.581914       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:46:47.581915       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:47:39.908318       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:47:39.908367       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:47:39.910002       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:47:47.588246       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	W0923 10:47:47.588273       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:47:47.588299       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	E0923 10:47:47.588306       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:47:47.589909       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:47:47.589914       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	
	
	==> kube-controller-manager [e008cb9d44fc] <==
	E0923 10:44:47.560938       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:44:47.560979       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:44:47.562097       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:44:47.562135       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:45:47.572568       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:45:47.572672       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:45:47.573917       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:45:47.573940       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:46:47.582512       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:46:47.582520       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:46:47.583692       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:46:47.583700       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	I0923 10:47:39.910608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="7.170343ms"
	E0923 10:47:39.910642       1 replica_set.go:560] "Unhandled Error" err="sync \"gcp-auth/gcp-auth-89d5ffd79\" failed with Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	I0923 10:47:46.737853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-12"
	E0923 10:47:47.590521       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:47:47.590574       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:47:47.591719       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:47:47.591731       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	I0923 10:48:03.173184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="64.04µs"
	I0923 10:48:04.173914       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="73.789µs"
	I0923 10:48:07.171864       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	I0923 10:48:16.172963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="63.638µs"
	I0923 10:48:19.171385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="65.324µs"
	I0923 10:48:22.173858       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	
	
	==> kube-proxy [045fad5ce6ab] <==
	I0923 10:36:38.573406       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:36:38.729619       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.128.15.239"]
	E0923 10:36:38.729768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:36:38.818441       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:36:38.818516       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:36:38.825889       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:36:38.826286       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:36:38.826330       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:36:38.829447       1 config.go:328] "Starting node config controller"
	I0923 10:36:38.829476       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:36:38.830499       1 config.go:199] "Starting service config controller"
	I0923 10:36:38.830549       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:36:38.830606       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:36:38.830612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:36:38.931771       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:36:38.931860       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:36:38.938436       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cefe11af8e63] <==
	W0923 10:36:30.422004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:36:30.422053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.448133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:36:30.448193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.597590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:36:30.597642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.627316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.627362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.638928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 10:36:30.638980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.639681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:36:30.639714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.656288       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:36:30.656331       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:36:30.673851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.673901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.732651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.732705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.750217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:36:30.750269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.788871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.788927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.793547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:36:30.793590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:36:32.724371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 10:48:45 UTC. --
	Sep 23 10:47:53 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:47:53.284003 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"no such image: \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:48:03 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:03.164681 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:48:04 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:48:04.162013 1590014 scope.go:117] "RemoveContainer" containerID="479fe5cc32913c30ee1f61f86ce466c10554b176126704459014bdbdced160af"
	Sep 23 10:48:04 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:04.164226 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:48:06 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:48:06.547844 1590014 scope.go:117] "RemoveContainer" containerID="479fe5cc32913c30ee1f61f86ce466c10554b176126704459014bdbdced160af"
	Sep 23 10:48:06 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:48:06.548246 1590014 scope.go:117] "RemoveContainer" containerID="3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165"
	Sep 23 10:48:06 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:06.548472 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-cc7cr_gadget(25f9725e-0663-4ecf-bd22-662c6d69802a)\"" pod="gadget/gadget-cc7cr" podUID="25f9725e-0663-4ecf-bd22-662c6d69802a"
	Sep 23 10:48:07 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:07.164469 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:48:08 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:48:08.186180 1590014 scope.go:117] "RemoveContainer" containerID="3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165"
	Sep 23 10:48:08 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:08.186432 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-cc7cr_gadget(25f9725e-0663-4ecf-bd22-662c6d69802a)\"" pod="gadget/gadget-cc7cr" podUID="25f9725e-0663-4ecf-bd22-662c6d69802a"
	Sep 23 10:48:16 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:16.164522 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:48:19 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:19.164214 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:48:20 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:48:20.162646 1590014 scope.go:117] "RemoveContainer" containerID="3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165"
	Sep 23 10:48:20 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:20.162847 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-cc7cr_gadget(25f9725e-0663-4ecf-bd22-662c6d69802a)\"" pod="gadget/gadget-cc7cr" podUID="25f9725e-0663-4ecf-bd22-662c6d69802a"
	Sep 23 10:48:22 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:22.165570 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:48:28 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:28.164162 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:48:32 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:48:32.163104 1590014 scope.go:117] "RemoveContainer" containerID="3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165"
	Sep 23 10:48:32 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:32.163345 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-cc7cr_gadget(25f9725e-0663-4ecf-bd22-662c6d69802a)\"" pod="gadget/gadget-cc7cr" podUID="25f9725e-0663-4ecf-bd22-662c6d69802a"
	Sep 23 10:48:32 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:32.164949 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:48:36 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:36.164152 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:48:42 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:48:42.162873 1590014 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-97lv7" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 10:48:42 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:42.164915 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:48:44 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:44.164408 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:48:45 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:48:45.162239 1590014 scope.go:117] "RemoveContainer" containerID="3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165"
	Sep 23 10:48:45 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:45.162510 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-cc7cr_gadget(25f9725e-0663-4ecf-bd22-662c6d69802a)\"" pod="gadget/gadget-cc7cr" podUID="25f9725e-0663-4ecf-bd22-662c6d69802a"
	
	
	==> storage-provisioner [a88800a1ce5b] <==
	I0923 10:36:38.418197       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:36:38.433696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:36:38.433749       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:36:38.445674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:36:38.446763       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0!
	I0923 10:36:38.449267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35a6bb7a-1e48-4bf9-816a-2d141c61bd81", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0 became leader
	I0923 10:36:38.547698       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g: exit status 1 (70.888462ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "volcano-admission-7f54bd7598-rfghv" not found
	Error from server (NotFound): pods "volcano-admission-init-gh7z4" not found
	Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-529t5" not found
	Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-zdd4g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g: exit status 1
--- FAIL: TestAddons/serial/Volcano (361.64s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (47.66s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context minikube get secret gcp-auth -n new-namespace: exit status 1 (66.284144ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context minikube logs -l app=gcp-auth -n gcp-auth
I0923 10:48:46.369552 1584534 retry.go:31] will retry after 1.664178441s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

                                                
                                                
** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context minikube get secret gcp-auth -n new-namespace: exit status 1 (67.505485ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context minikube logs -l app=gcp-auth -n gcp-auth
I0923 10:48:48.169830 1584534 retry.go:31] will retry after 1.647545919s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

                                                
                                                
** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context minikube get secret gcp-auth -n new-namespace: exit status 1 (66.553335ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context minikube logs -l app=gcp-auth -n gcp-auth
I0923 10:48:49.952045 1584534 retry.go:31] will retry after 5.385118885s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

                                                
                                                
** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context minikube get secret gcp-auth -n new-namespace: exit status 1 (77.072472ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context minikube logs -l app=gcp-auth -n gcp-auth
I0923 10:48:55.482313 1584534 retry.go:31] will retry after 8.38654618s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

                                                
                                                
** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context minikube get secret gcp-auth -n new-namespace: exit status 1 (68.817777ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context minikube logs -l app=gcp-auth -n gcp-auth
I0923 10:49:04.009899 1584534 retry.go:31] will retry after 12.086890097s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

                                                
                                                
** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context minikube get secret gcp-auth -n new-namespace: exit status 1 (66.939983ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context minikube logs -l app=gcp-auth -n gcp-auth
I0923 10:49:16.229864 1584534 retry.go:31] will retry after 17.467157033s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

                                                
                                                
** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context minikube get secret gcp-auth -n new-namespace: exit status 1 (66.534284ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context minikube logs -l app=gcp-auth -n gcp-auth
addons_test.go:616: failed to get secret: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

                                                
                                                
** /stderr **
--- FAIL: TestAddons/serial/GCPAuth/Namespaces (47.66s)

                                                
                                    
TestAddons/parallel/Registry (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.752018ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-xghlh" [3805a0ce-c102-4a58-92fb-1845d803f30a] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004356766s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-j2dg7" [04db77a5-6d0f-40b1-b220-f94a39762520] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003542836s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (66.755808ms)

                                                
                                                
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused

                                                
                                                
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got **
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/23 10:49:45 [DEBUG] GET http://10.128.15.239:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:42273               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:36 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:42 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:49 UTC | 23 Sep 24 10:49 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:49 UTC | 23 Sep 24 10:49 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:36:19
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:36:19.158069 1588554 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:36:19.158231 1588554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:36:19.158241 1588554 out.go:358] Setting ErrFile to fd 2...
	I0923 10:36:19.158245 1588554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:36:19.158464 1588554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-1577701/.minikube/bin
	I0923 10:36:19.159125 1588554 out.go:352] Setting JSON to false
	I0923 10:36:19.160039 1588554 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":145130,"bootTime":1726942649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:36:19.160160 1588554 start.go:139] virtualization: kvm guest
	I0923 10:36:19.162394 1588554 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:36:19.163650 1588554 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:36:19.163676 1588554 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 10:36:19.163732 1588554 notify.go:220] Checking for updates...
	I0923 10:36:19.166389 1588554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:36:19.167804 1588554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:36:19.169081 1588554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	I0923 10:36:19.170968 1588554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:36:19.172507 1588554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:36:19.174424 1588554 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:36:19.185459 1588554 out.go:177] * Using the none driver based on user configuration
	I0923 10:36:19.186681 1588554 start.go:297] selected driver: none
	I0923 10:36:19.186694 1588554 start.go:901] validating driver "none" against <nil>
	I0923 10:36:19.186706 1588554 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:36:19.186759 1588554 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 10:36:19.187052 1588554 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0923 10:36:19.187561 1588554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:36:19.187804 1588554 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:36:19.187836 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:19.187883 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:19.187891 1588554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:36:19.187950 1588554 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:36:19.190491 1588554 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0923 10:36:19.192247 1588554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json ...
	I0923 10:36:19.192296 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json: {Name:mk0db601d978f1f6b111e723fd0658218dee1a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:19.192505 1588554 start.go:360] acquireMachinesLock for minikube: {Name:mka47a0638fa8ca4d22f1fa46c51878d308fb6cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:36:19.192555 1588554 start.go:364] duration metric: took 26.854µs to acquireMachinesLock for "minikube"
	I0923 10:36:19.192576 1588554 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:36:19.192689 1588554 start.go:125] createHost starting for "" (driver="none")
	I0923 10:36:19.194985 1588554 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0923 10:36:19.196198 1588554 exec_runner.go:51] Run: systemctl --version
	I0923 10:36:19.198807 1588554 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0923 10:36:19.198844 1588554 client.go:168] LocalClient.Create starting
	I0923 10:36:19.198929 1588554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca.pem
	I0923 10:36:19.198967 1588554 main.go:141] libmachine: Decoding PEM data...
	I0923 10:36:19.198986 1588554 main.go:141] libmachine: Parsing certificate...
	I0923 10:36:19.199033 1588554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/cert.pem
	I0923 10:36:19.199052 1588554 main.go:141] libmachine: Decoding PEM data...
	I0923 10:36:19.199065 1588554 main.go:141] libmachine: Parsing certificate...
	I0923 10:36:19.199430 1588554 client.go:171] duration metric: took 577.868µs to LocalClient.Create
	I0923 10:36:19.199455 1588554 start.go:167] duration metric: took 651.01µs to libmachine.API.Create "minikube"
	I0923 10:36:19.199461 1588554 start.go:293] postStartSetup for "minikube" (driver="none")
	I0923 10:36:19.199503 1588554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:36:19.199539 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:36:19.209126 1588554 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:36:19.209149 1588554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:36:19.209157 1588554 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:36:19.210966 1588554 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0923 10:36:19.212083 1588554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-1577701/.minikube/addons for local assets ...
	I0923 10:36:19.212135 1588554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-1577701/.minikube/files for local assets ...
	I0923 10:36:19.212155 1588554 start.go:296] duration metric: took 12.687054ms for postStartSetup
	I0923 10:36:19.212795 1588554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json ...
	I0923 10:36:19.212933 1588554 start.go:128] duration metric: took 20.232501ms to createHost
	I0923 10:36:19.212946 1588554 start.go:83] releasing machines lock for "minikube", held for 20.378727ms
	I0923 10:36:19.213290 1588554 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:36:19.213405 1588554 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0923 10:36:19.215275 1588554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:36:19.215410 1588554 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:36:19.225131 1588554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 10:36:19.225172 1588554 start.go:495] detecting cgroup driver to use...
	I0923 10:36:19.225207 1588554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:36:19.225324 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:36:19.246269 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:36:19.256037 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:36:19.265994 1588554 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:36:19.266081 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:36:19.276368 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:36:19.286490 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:36:19.297389 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:36:19.307066 1588554 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:36:19.316656 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:36:19.326288 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:36:19.336363 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 10:36:19.346290 1588554 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:36:19.355338 1588554 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:36:19.364071 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:19.577952 1588554 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0923 10:36:19.651036 1588554 start.go:495] detecting cgroup driver to use...
	I0923 10:36:19.651102 1588554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:36:19.651252 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:36:19.672247 1588554 exec_runner.go:51] Run: which cri-dockerd
	I0923 10:36:19.673216 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 10:36:19.681044 1588554 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0923 10:36:19.681067 1588554 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.681103 1588554 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.689425 1588554 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 10:36:19.689591 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4059772120 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.698668 1588554 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0923 10:36:19.932327 1588554 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0923 10:36:20.150083 1588554 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 10:36:20.150282 1588554 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0923 10:36:20.150300 1588554 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0923 10:36:20.150338 1588554 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0923 10:36:20.158569 1588554 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0923 10:36:20.158734 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2996454661 /etc/docker/daemon.json
	I0923 10:36:20.168354 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:20.379218 1588554 exec_runner.go:51] Run: sudo systemctl restart docker
	I0923 10:36:20.693080 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 10:36:20.705085 1588554 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0923 10:36:20.723552 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:36:20.735597 1588554 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0923 10:36:20.953725 1588554 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0923 10:36:21.177941 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:21.410173 1588554 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0923 10:36:21.423706 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:36:21.435794 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:21.688698 1588554 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0923 10:36:21.764452 1588554 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 10:36:21.764538 1588554 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0923 10:36:21.765977 1588554 start.go:563] Will wait 60s for crictl version
	I0923 10:36:21.766041 1588554 exec_runner.go:51] Run: which crictl
	I0923 10:36:21.767183 1588554 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0923 10:36:21.799990 1588554 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0923 10:36:21.800066 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:21.821449 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:21.845424 1588554 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0923 10:36:21.845506 1588554 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0923 10:36:21.848567 1588554 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0923 10:36:21.850015 1588554 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:36:21.850144 1588554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:36:21.850155 1588554 kubeadm.go:934] updating node { 10.128.15.239 8443 v1.31.1 docker true true} ...
	I0923 10:36:21.850253 1588554 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-12 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.239 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0923 10:36:21.850310 1588554 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0923 10:36:21.901691 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:21.901719 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:21.901730 1588554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:36:21.901755 1588554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.239 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-12 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:36:21.901910 1588554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.128.15.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-12"
	  kubeletExtraArgs:
	    node-ip: 10.128.15.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.128.15.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:36:21.901970 1588554 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:36:21.910706 1588554 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:36:21.910760 1588554 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:36:21.918867 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:36:21.918878 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 10:36:21.918874 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 10:36:21.918927 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:36:21.918927 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:36:21.919007 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:36:21.931740 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 10:36:21.973404 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2218285672 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:36:21.975632 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube621796612 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:36:22.005095 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3553074774 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:36:22.078082 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:36:22.087582 1588554 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0923 10:36:22.087606 1588554 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.087647 1588554 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.095444 1588554 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0923 10:36:22.095602 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4110124182 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.105645 1588554 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0923 10:36:22.105666 1588554 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0923 10:36:22.105700 1588554 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0923 10:36:22.113822 1588554 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:36:22.114022 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3324119727 /lib/systemd/system/kubelet.service
	I0923 10:36:22.123427 1588554 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0923 10:36:22.123598 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3318915681 /var/tmp/minikube/kubeadm.yaml.new
	I0923 10:36:22.131907 1588554 exec_runner.go:51] Run: grep 10.128.15.239	control-plane.minikube.internal$ /etc/hosts
	I0923 10:36:22.133649 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:22.363463 1588554 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:36:22.378439 1588554 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube for IP: 10.128.15.239
	I0923 10:36:22.378459 1588554 certs.go:194] generating shared ca certs ...
	I0923 10:36:22.378479 1588554 certs.go:226] acquiring lock for ca certs: {Name:mk757d3be8cf2fb32b8856d4b5e3173183901a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.378637 1588554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.key
	I0923 10:36:22.378678 1588554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.key
	I0923 10:36:22.378687 1588554 certs.go:256] generating profile certs ...
	I0923 10:36:22.378744 1588554 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key
	I0923 10:36:22.378763 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt with IP's: []
	I0923 10:36:22.592011 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt ...
	I0923 10:36:22.592085 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt: {Name:mk1bdb710d99b77b32099c81dc261479f881a61c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.592249 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key ...
	I0923 10:36:22.592262 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key: {Name:mk990e2a3a19cc03d4722edbfa635f5e467b2b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.592353 1588554 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83
	I0923 10:36:22.592371 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.239]
	I0923 10:36:22.826429 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 ...
	I0923 10:36:22.826468 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83: {Name:mkdaa76b99a75fc999a744f15c5aa0e73646ad27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.826632 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83 ...
	I0923 10:36:22.826650 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83: {Name:mk5c84f7ccec239df3b3f71560e288a437b89d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.826728 1588554 certs.go:381] copying /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 -> /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt
	I0923 10:36:22.826837 1588554 certs.go:385] copying /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83 -> /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key
	I0923 10:36:22.826896 1588554 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key
	I0923 10:36:22.826913 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0923 10:36:22.988376 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt ...
	I0923 10:36:22.988415 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt: {Name:mk1a79d5dbe06be337e3230425d1c5cb0b5c9c8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.988572 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key ...
	I0923 10:36:22.988587 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key: {Name:mk7f2be748011aa06064cd625f3afbd5fec49aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.988800 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:36:22.988842 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:36:22.988874 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:36:22.988896 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/key.pem (1675 bytes)
	I0923 10:36:22.989638 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:36:22.989763 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube32048499 /var/lib/minikube/certs/ca.crt
	I0923 10:36:22.999482 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 10:36:22.999627 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2462737595 /var/lib/minikube/certs/ca.key
	I0923 10:36:23.008271 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:36:23.008403 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2315409218 /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:36:23.016619 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:36:23.016796 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2778680620 /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:36:23.026283 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0923 10:36:23.026429 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2563673913 /var/lib/minikube/certs/apiserver.crt
	I0923 10:36:23.034367 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:36:23.034559 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1327376112 /var/lib/minikube/certs/apiserver.key
	I0923 10:36:23.043236 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:36:23.043385 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3861098534 /var/lib/minikube/certs/proxy-client.crt
	I0923 10:36:23.053261 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:36:23.053393 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1865989171 /var/lib/minikube/certs/proxy-client.key
	I0923 10:36:23.062749 1588554 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0923 10:36:23.062771 1588554 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.062810 1588554 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.070407 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:36:23.070572 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2921020744 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.078922 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:36:23.079082 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1931847277 /var/lib/minikube/kubeconfig
	I0923 10:36:23.087191 1588554 exec_runner.go:51] Run: openssl version
	I0923 10:36:23.090067 1588554 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:36:23.098811 1588554 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.100243 1588554 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 23 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.100280 1588554 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.103237 1588554 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:36:23.112696 1588554 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:36:23.113952 1588554 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:36:23.113993 1588554 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:36:23.114121 1588554 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 10:36:23.130863 1588554 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:36:23.141170 1588554 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:36:23.154896 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:23.177871 1588554 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:36:23.186183 1588554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:36:23.186207 1588554 kubeadm.go:157] found existing configuration files:
	
	I0923 10:36:23.186251 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:36:23.195211 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:36:23.195272 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:36:23.203608 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:36:23.212052 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:36:23.212118 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:36:23.220697 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:36:23.231762 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:36:23.231826 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:36:23.239886 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:36:23.250151 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:36:23.250215 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:36:23.257852 1588554 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:36:23.292982 1588554 kubeadm.go:310] W0923 10:36:23.292852 1589455 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:36:23.293485 1588554 kubeadm.go:310] W0923 10:36:23.293445 1589455 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:36:23.295381 1588554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:36:23.295429 1588554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:36:23.388509 1588554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:36:23.388613 1588554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:36:23.388622 1588554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:36:23.388626 1588554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:36:23.400110 1588554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:36:23.403660 1588554 out.go:235]   - Generating certificates and keys ...
	I0923 10:36:23.403706 1588554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:36:23.403719 1588554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:36:23.479635 1588554 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:36:23.612116 1588554 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:36:23.692069 1588554 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:36:23.926999 1588554 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:36:24.011480 1588554 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:36:24.011600 1588554 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 10:36:24.104614 1588554 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:36:24.104769 1588554 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 10:36:24.304540 1588554 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:36:24.538700 1588554 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:36:24.615897 1588554 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:36:24.616110 1588554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:36:24.791653 1588554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:36:24.910277 1588554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:36:25.215908 1588554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:36:25.289127 1588554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:36:25.490254 1588554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:36:25.490804 1588554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:36:25.493193 1588554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:36:25.495266 1588554 out.go:235]   - Booting up control plane ...
	I0923 10:36:25.495299 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:36:25.495318 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:36:25.495739 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:36:25.515279 1588554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:36:25.519949 1588554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:36:25.519979 1588554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:36:25.765044 1588554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:36:25.765080 1588554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:36:26.266756 1588554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.690653ms
	I0923 10:36:26.266797 1588554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:36:31.268595 1588554 kubeadm.go:310] [api-check] The API server is healthy after 5.001820679s
	I0923 10:36:31.279620 1588554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:36:31.290992 1588554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:36:31.308130 1588554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:36:31.308158 1588554 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-12 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:36:31.315634 1588554 kubeadm.go:310] [bootstrap-token] Using token: vj37sq.3v8d1kp1945z41wj
	I0923 10:36:31.316963 1588554 out.go:235]   - Configuring RBAC rules ...
	I0923 10:36:31.317008 1588554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:36:31.320391 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:36:31.328142 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:36:31.330741 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:36:31.333381 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:36:31.335890 1588554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:36:31.675856 1588554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:36:32.106847 1588554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:36:32.674219 1588554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:36:32.675126 1588554 kubeadm.go:310] 
	I0923 10:36:32.675137 1588554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:36:32.675141 1588554 kubeadm.go:310] 
	I0923 10:36:32.675148 1588554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:36:32.675152 1588554 kubeadm.go:310] 
	I0923 10:36:32.675156 1588554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:36:32.675160 1588554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:36:32.675164 1588554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:36:32.675171 1588554 kubeadm.go:310] 
	I0923 10:36:32.675175 1588554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:36:32.675179 1588554 kubeadm.go:310] 
	I0923 10:36:32.675184 1588554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:36:32.675188 1588554 kubeadm.go:310] 
	I0923 10:36:32.675192 1588554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:36:32.675196 1588554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:36:32.675207 1588554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:36:32.675211 1588554 kubeadm.go:310] 
	I0923 10:36:32.675217 1588554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:36:32.675221 1588554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:36:32.675225 1588554 kubeadm.go:310] 
	I0923 10:36:32.675228 1588554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vj37sq.3v8d1kp1945z41wj \
	I0923 10:36:32.675233 1588554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91a09f8ec29205faf582a48ccf10beda52dc431d394b0dc26a537d8edbd2b49c \
	I0923 10:36:32.675237 1588554 kubeadm.go:310] 	--control-plane 
	I0923 10:36:32.675242 1588554 kubeadm.go:310] 
	I0923 10:36:32.675246 1588554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:36:32.675252 1588554 kubeadm.go:310] 
	I0923 10:36:32.675255 1588554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vj37sq.3v8d1kp1945z41wj \
	I0923 10:36:32.675258 1588554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91a09f8ec29205faf582a48ccf10beda52dc431d394b0dc26a537d8edbd2b49c 
	I0923 10:36:32.679087 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:32.679120 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:32.680982 1588554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:36:32.682253 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:36:32.692879 1588554 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 10:36:32.693059 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3098274276 /etc/cni/net.d/1-k8s.conflist
	I0923 10:36:32.704393 1588554 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:36:32.704473 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:32.704510 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-12 minikube.k8s.io/updated_at=2024_09_23T10_36_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0923 10:36:32.713564 1588554 ops.go:34] apiserver oom_adj: -16
	I0923 10:36:32.777699 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:33.277929 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:33.778034 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:34.278552 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:34.777937 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:35.278677 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:35.777756 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:36.278547 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:36.343720 1588554 kubeadm.go:1113] duration metric: took 3.63930993s to wait for elevateKubeSystemPrivileges
	I0923 10:36:36.343761 1588554 kubeadm.go:394] duration metric: took 13.229771538s to StartCluster
	I0923 10:36:36.343783 1588554 settings.go:142] acquiring lock: {Name:mkf413d2c932a8f45f91708eee4886fc43a35e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:36.343846 1588554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:36:36.344451 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/kubeconfig: {Name:mk42cd91ee317759dd4ab26721004c644d4d46c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:36.344664 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:36:36.344755 1588554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:36:36.344891 1588554 addons.go:69] Setting yakd=true in profile "minikube"
	I0923 10:36:36.344910 1588554 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0923 10:36:36.344913 1588554 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0923 10:36:36.344939 1588554 addons.go:69] Setting registry=true in profile "minikube"
	I0923 10:36:36.344931 1588554 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0923 10:36:36.344946 1588554 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0923 10:36:36.344964 1588554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0923 10:36:36.344976 1588554 addons.go:234] Setting addon registry=true in "minikube"
	I0923 10:36:36.344980 1588554 mustload.go:65] Loading cluster: minikube
	I0923 10:36:36.344979 1588554 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0923 10:36:36.344992 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.344990 1588554 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:36:36.345000 1588554 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0923 10:36:36.345005 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345031 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345045 1588554 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0923 10:36:36.345072 1588554 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0923 10:36:36.345087 1588554 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0923 10:36:36.345088 1588554 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0923 10:36:36.345104 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345114 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345179 1588554 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:36:36.345317 1588554 addons.go:69] Setting volcano=true in profile "minikube"
	I0923 10:36:36.345335 1588554 addons.go:234] Setting addon volcano=true in "minikube"
	I0923 10:36:36.345361 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345658 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345675 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345680 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345690 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345717 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345758 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345762 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345775 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345780 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345807 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345824 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345827 1588554 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0923 10:36:36.345827 1588554 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0923 10:36:36.344919 1588554 addons.go:234] Setting addon yakd=true in "minikube"
	I0923 10:36:36.345839 1588554 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0923 10:36:36.345843 1588554 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0923 10:36:36.344930 1588554 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0923 10:36:36.345858 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345860 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345861 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345874 1588554 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0923 10:36:36.345918 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345811 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346177 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346191 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346221 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346328 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346342 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346371 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346524 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346536 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346550 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345861 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.346579 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346655 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346673 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346705 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345810 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345717 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346539 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.347192 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.347221 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.347233 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.347253 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.347284 1588554 out.go:177] * Configuring local host environment ...
	I0923 10:36:36.345829 1588554 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0923 10:36:36.347650 1588554 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0923 10:36:36.348407 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.348430 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.348463 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0923 10:36:36.348690 1588554 out.go:270] * 
	W0923 10:36:36.348780 1588554 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0923 10:36:36.348809 1588554 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0923 10:36:36.348865 1588554 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0923 10:36:36.348897 1588554 out.go:270] * 
	W0923 10:36:36.348999 1588554 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0923 10:36:36.349040 1588554 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0923 10:36:36.349080 1588554 out.go:270] * 
	W0923 10:36:36.349130 1588554 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0923 10:36:36.349173 1588554 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0923 10:36:36.349199 1588554 out.go:270] * 
	W0923 10:36:36.349236 1588554 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0923 10:36:36.349282 1588554 start.go:235] Will wait 6m0s for node &{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:36:36.345810 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.350050 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.350088 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.350710 1588554 out.go:177] * Verifying Kubernetes components...
	I0923 10:36:36.352239 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:36.369581 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.369720 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.370463 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.371382 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.373298 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.379392 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.383028 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.385097 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.385628 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.385693 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.389742 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.389782 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.389793 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402210 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402285 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402285 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402325 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402356 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402407 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402488 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402530 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402557 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402328 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.406952 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.406987 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.407339 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.407394 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.414599 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.414632 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.415393 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.415455 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.415667 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.415722 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.417736 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.417799 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.420551 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.420602 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.421969 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.421994 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.422984 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.423319 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.423344 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.424659 1588554 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:36:36.424874 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.424899 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.428268 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.428559 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.430071 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:36:36.430076 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:36:36.430207 1588554 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:36:36.431382 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:36:36.431427 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:36:36.431585 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube840197264 /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:36:36.431790 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:36:36.431815 1588554 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:36:36.431987 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1728725482 /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:36:36.433518 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:36:36.434702 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:36:36.435367 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.435397 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.436902 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:36:36.438150 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:36:36.439277 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.439337 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.440540 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.440996 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:36:36.442010 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:36:36.442071 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.442098 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.442561 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.442772 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.443079 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.443136 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.443350 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.443375 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.443492 1588554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443518 1588554 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0923 10:36:36.443525 1588554 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443566 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443844 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:36:36.443879 1588554 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:36:36.444008 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4228746672 /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:36:36.444580 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:36:36.446035 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:36:36.446930 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.446950 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.447416 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.448168 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.448190 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.448643 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.448661 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.449758 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0923 10:36:36.449765 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:36:36.449802 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:36:36.449942 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4141716628 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:36:36.452784 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.452686 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0923 10:36:36.454911 1588554 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:36:36.454973 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.455634 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.456554 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.457231 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:36:36.457268 1588554 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:36:36.457428 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1881343942 /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:36:36.458064 1588554 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.458100 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:36:36.458238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2629288326 /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.458427 1588554 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:36:36.458490 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0923 10:36:36.458554 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:36:36.458748 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.459583 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.459224 1588554 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0923 10:36:36.459875 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.459904 1588554 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.459934 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:36:36.460073 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1172599530 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.460516 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:36:36.460548 1588554 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:36:36.460695 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1059056177 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:36:36.462006 1588554 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.462043 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
	I0923 10:36:36.462614 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3721652212 /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.464913 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.464936 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.464972 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.467000 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.472480 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:36:36.473238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube726889991 /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.480760 1588554 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0923 10:36:36.480939 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.485106 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.485141 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.485190 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.487844 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:36:36.487878 1588554 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:36:36.488012 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3601575597 /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:36:36.489111 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.491189 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:36:36.491220 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:36:36.491369 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3194307137 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:36:36.492639 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.492667 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.494194 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.494218 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.494867 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.498982 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.499389 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.500765 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.500800 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:36:36.500956 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1985750997 /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.501929 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.503522 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.507731 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:36:36.507981 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:36:36.508221 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2102644874 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:36:36.508499 1588554 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:36:36.508667 1588554 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:36:36.509791 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:36:36.509885 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:36:36.510186 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:36:36.510211 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:36:36.510259 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:36:36.510276 1588554 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:36:36.510535 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2223790766 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:36:36.510687 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2284125594 /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:36:36.511165 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1172030255 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:36:36.518843 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.518932 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.519210 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:36:36.519243 1588554 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:36:36.519417 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2081246003 /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:36:36.527052 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.530307 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.531182 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.531199 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:36:36.531224 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:36:36.531366 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2359416048 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:36:36.534852 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.534897 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.534862 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:36:36.534931 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:36:36.534930 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:36:36.534953 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:36:36.535115 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube169766603 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:36:36.535148 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube873661914 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:36:36.540683 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.547811 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:36:36.548029 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:36:36.548063 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:36:36.548238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube411864712 /etc/kubernetes/addons/ig-role.yaml
	I0923 10:36:36.553057 1588554 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:36:36.555188 1588554 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:36:36.555273 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:36:36.555312 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:36:36.555486 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4206261347 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:36:36.562063 1588554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.562124 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:36:36.562318 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918834683 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.563155 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:36:36.563195 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:36:36.563361 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2570607285 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:36:36.568213 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:36:36.568257 1588554 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:36:36.568398 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube393911802 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:36:36.571999 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.572033 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:36:36.572185 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2353575520 /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.577466 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.577543 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.587661 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.598560 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.598607 1588554 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:36:36.598954 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2771751730 /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.603217 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:36:36.603313 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:36:36.603600 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4069496750 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:36:36.604133 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:36:36.604165 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:36:36.604308 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1964334193 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:36:36.604545 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:36:36.604574 1588554 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:36:36.604700 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2583663156 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:36:36.610522 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.610602 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.615633 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.616448 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.616504 1588554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.616528 1588554 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0923 10:36:36.616540 1588554 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.616587 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.633448 1588554 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.633487 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:36:36.633636 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3570026092 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.637790 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:36:36.637820 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:36:36.637954 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2992782773 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:36:36.646982 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.677372 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.679555 1588554 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:36:36.679857 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4202431507 /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.688839 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:36:36.688874 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:36:36.689001 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube389006966 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:36:36.693416 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:36:36.693456 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:36:36.693585 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2951849839 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:36:36.738946 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.774333 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:36:36.774371 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:36:36.774529 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1226040952 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:36:36.785891 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:36:36.785936 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:36:36.786131 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1330733841 /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:36:36.796363 1588554 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:36:36.807897 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.807939 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:36:36.808082 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube111334727 /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.814837 1588554 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 10:36:36.818242 1588554 node_ready.go:49] node "ubuntu-20-agent-12" has status "Ready":"True"
	I0923 10:36:36.818281 1588554 node_ready.go:38] duration metric: took 3.403871ms for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 10:36:36.818293 1588554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:36:36.823705 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.828322 1588554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:36.832595 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:36:36.832627 1588554 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:36:36.832974 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1712125769 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:36:36.870153 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:36:36.870197 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:36:36.870386 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2973576979 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:36:36.926104 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:36:36.926143 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:36:36.926289 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2280122930 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:36:36.938896 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:36.938934 1588554 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:36:36.939070 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1690561903 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:36.950670 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:37.100928 1588554 addons.go:475] Verifying addon registry=true in "minikube"
	I0923 10:36:37.102814 1588554 out.go:177] * Verifying registry addon...
	I0923 10:36:37.112453 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:36:37.120259 1588554 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:36:37.187559 1588554 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0923 10:36:37.634285 1588554 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:36:37.634317 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:37.695664 1588554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0923 10:36:37.724258 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.07719175s)
	I0923 10:36:37.724301 1588554 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0923 10:36:37.739850 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.124159231s)
	I0923 10:36:37.742561 1588554 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0923 10:36:37.849519 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.025767323s)
	I0923 10:36:38.120128 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:38.376349 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.69890606s)
	W0923 10:36:38.376406 1588554 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:36:38.376435 1588554 retry.go:31] will retry after 154.227647ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:36:38.532717 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:38.617615 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:38.835917 1588554 pod_ready.go:103] pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:39.116010 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:39.531492 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.580742626s)
	I0923 10:36:39.531534 1588554 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0923 10:36:39.537060 1588554 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:36:39.539558 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:36:39.547478 1588554 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:36:39.547508 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:39.616393 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:39.677521 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.146291745s)
	I0923 10:36:40.048496 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:40.116802 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:40.545476 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:40.617107 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:40.834321 1588554 pod_ready.go:93] pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:40.834347 1588554 pod_ready.go:82] duration metric: took 4.005994703s for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:40.834359 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:41.044378 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:41.144560 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:41.351204 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.818400841s)
	I0923 10:36:41.545380 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:41.616429 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.044309 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:42.116963 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.545513 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:42.616637 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.841366 1588554 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:43.045300 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:43.116762 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:43.431875 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:36:43.432127 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3269004284 /var/lib/minikube/google_application_credentials.json
	I0923 10:36:43.445163 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:36:43.445319 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3403460145 /var/lib/minikube/google_cloud_project
	I0923 10:36:43.457431 1588554 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0923 10:36:43.457499 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:43.458127 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:43.458149 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:43.458181 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:43.479053 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:43.491340 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:43.491424 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:43.503388 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:43.503426 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:43.508517 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:43.508577 1588554 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:36:43.511610 1588554 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:36:43.513346 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:36:43.514725 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:36:43.514758 1588554 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:36:43.514881 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube616037526 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:36:43.525139 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:36:43.525184 1588554 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:36:43.525334 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3406397122 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:36:43.536623 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.536656 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:36:43.536845 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3654027324 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.544627 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:43.548001 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.616664 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:44.106662 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:44.245172 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:44.462186 1588554 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0923 10:36:44.463828 1588554 out.go:177] * Verifying gcp-auth addon...
	I0923 10:36:44.466561 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:36:44.469735 1588554 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:36:44.571760 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:44.616121 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:45.045508 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:45.116582 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:45.342074 1588554 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:45.544902 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:45.617645 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.044759 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:46.117793 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.546485 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:46.616891 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.840864 1588554 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.840888 1588554 pod_ready.go:82] duration metric: took 6.006520139s for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.840899 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.846458 1588554 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.846487 1588554 pod_ready.go:82] duration metric: took 5.579842ms for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.846499 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.850991 1588554 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.851013 1588554 pod_ready.go:82] duration metric: took 4.506621ms for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.851020 1588554 pod_ready.go:39] duration metric: took 10.032714922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:36:46.851040 1588554 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:36:46.851099 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:46.875129 1588554 api_server.go:72] duration metric: took 10.525769516s to wait for apiserver process to appear ...
	I0923 10:36:46.875164 1588554 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:36:46.875191 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:46.879815 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:46.880904 1588554 api_server.go:141] control plane version: v1.31.1
	I0923 10:36:46.880933 1588554 api_server.go:131] duration metric: took 5.761723ms to wait for apiserver health ...
	I0923 10:36:46.880944 1588554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:36:46.889660 1588554 system_pods.go:59] 16 kube-system pods found
	I0923 10:36:46.889699 1588554 system_pods.go:61] "coredns-7c65d6cfc9-p5xcl" [f5f9a7c8-fde0-47d4-ad0d-64ad04053a9c] Running
	I0923 10:36:46.889712 1588554 system_pods.go:61] "csi-hostpath-attacher-0" [3359d397-e4ff-42f7-a50a-d3f528d35993] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:36:46.889722 1588554 system_pods.go:61] "csi-hostpath-resizer-0" [9c4d8c86-795e-4ef6-a3ee-092372993d50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:36:46.889739 1588554 system_pods.go:61] "csi-hostpathplugin-2flxk" [1fd9aa09-39b0-440c-a97d-578bbad40f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:36:46.889746 1588554 system_pods.go:61] "etcd-ubuntu-20-agent-12" [a5459b2e-0d67-4c43-9e0d-f680efb64d3f] Running
	I0923 10:36:46.889752 1588554 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-12" [1a730626-aab7-4d08-b75b-523608e16b08] Running
	I0923 10:36:46.889759 1588554 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-12" [e67abe58-a228-4b5d-a487-1afe60ef2341] Running
	I0923 10:36:46.889765 1588554 system_pods.go:61] "kube-proxy-275md" [5201ac4e-6f2a-4040-ba5b-de3260351ceb] Running
	I0923 10:36:46.889770 1588554 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-12" [a148d437-fa1a-470b-a96d-ac0bd83228cd] Running
	I0923 10:36:46.889777 1588554 system_pods.go:61] "metrics-server-84c5f94fbc-l8xpt" [be83f637-49a0-4d61-b588-544359407926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:36:46.889783 1588554 system_pods.go:61] "nvidia-device-plugin-daemonset-rmgc2" [7b196bf3-bd4c-4575-9cd3-d1c7adf5e6be] Running
	I0923 10:36:46.889793 1588554 system_pods.go:61] "registry-66c9cd494c-xghlh" [3805a0ce-c102-4a58-92fb-1845d803f30a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:36:46.889800 1588554 system_pods.go:61] "registry-proxy-j2dg7" [04db77a5-6d0f-40b1-b220-f94a39762520] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:36:46.889810 1588554 system_pods.go:61] "snapshot-controller-56fcc65765-ncqwr" [9e2acf06-ed7b-441d-95cd-2bf1bcde1ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.889821 1588554 system_pods.go:61] "snapshot-controller-56fcc65765-xp8jb" [420b2463-f719-45de-a16b-01add2f57250] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.889826 1588554 system_pods.go:61] "storage-provisioner" [609264e3-b351-446c-bb44-88cf8a4fbfca] Running
	I0923 10:36:46.889835 1588554 system_pods.go:74] duration metric: took 8.88361ms to wait for pod list to return data ...
	I0923 10:36:46.889844 1588554 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:36:46.892857 1588554 default_sa.go:45] found service account: "default"
	I0923 10:36:46.892882 1588554 default_sa.go:55] duration metric: took 3.031168ms for default service account to be created ...
	I0923 10:36:46.892893 1588554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:36:46.901634 1588554 system_pods.go:86] 16 kube-system pods found
	I0923 10:36:46.901674 1588554 system_pods.go:89] "coredns-7c65d6cfc9-p5xcl" [f5f9a7c8-fde0-47d4-ad0d-64ad04053a9c] Running
	I0923 10:36:46.901688 1588554 system_pods.go:89] "csi-hostpath-attacher-0" [3359d397-e4ff-42f7-a50a-d3f528d35993] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:36:46.901699 1588554 system_pods.go:89] "csi-hostpath-resizer-0" [9c4d8c86-795e-4ef6-a3ee-092372993d50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:36:46.901714 1588554 system_pods.go:89] "csi-hostpathplugin-2flxk" [1fd9aa09-39b0-440c-a97d-578bbad40f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:36:46.901725 1588554 system_pods.go:89] "etcd-ubuntu-20-agent-12" [a5459b2e-0d67-4c43-9e0d-f680efb64d3f] Running
	I0923 10:36:46.901732 1588554 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-12" [1a730626-aab7-4d08-b75b-523608e16b08] Running
	I0923 10:36:46.901741 1588554 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-12" [e67abe58-a228-4b5d-a487-1afe60ef2341] Running
	I0923 10:36:46.901747 1588554 system_pods.go:89] "kube-proxy-275md" [5201ac4e-6f2a-4040-ba5b-de3260351ceb] Running
	I0923 10:36:46.901753 1588554 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-12" [a148d437-fa1a-470b-a96d-ac0bd83228cd] Running
	I0923 10:36:46.901767 1588554 system_pods.go:89] "metrics-server-84c5f94fbc-l8xpt" [be83f637-49a0-4d61-b588-544359407926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:36:46.901776 1588554 system_pods.go:89] "nvidia-device-plugin-daemonset-rmgc2" [7b196bf3-bd4c-4575-9cd3-d1c7adf5e6be] Running
	I0923 10:36:46.901784 1588554 system_pods.go:89] "registry-66c9cd494c-xghlh" [3805a0ce-c102-4a58-92fb-1845d803f30a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:36:46.901790 1588554 system_pods.go:89] "registry-proxy-j2dg7" [04db77a5-6d0f-40b1-b220-f94a39762520] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:36:46.901801 1588554 system_pods.go:89] "snapshot-controller-56fcc65765-ncqwr" [9e2acf06-ed7b-441d-95cd-2bf1bcde1ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.901810 1588554 system_pods.go:89] "snapshot-controller-56fcc65765-xp8jb" [420b2463-f719-45de-a16b-01add2f57250] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.901814 1588554 system_pods.go:89] "storage-provisioner" [609264e3-b351-446c-bb44-88cf8a4fbfca] Running
	I0923 10:36:46.901824 1588554 system_pods.go:126] duration metric: took 8.925234ms to wait for k8s-apps to be running ...
	I0923 10:36:46.901834 1588554 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:36:46.901887 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:36:46.916755 1588554 system_svc.go:56] duration metric: took 14.881074ms WaitForService to wait for kubelet
	I0923 10:36:46.916789 1588554 kubeadm.go:582] duration metric: took 10.567438885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:36:46.916809 1588554 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:36:46.920579 1588554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 10:36:46.920616 1588554 node_conditions.go:123] node cpu capacity is 8
	I0923 10:36:46.920632 1588554 node_conditions.go:105] duration metric: took 3.817539ms to run NodePressure ...
	I0923 10:36:46.920648 1588554 start.go:241] waiting for startup goroutines ...
	I0923 10:36:47.045158 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:47.117155 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:47.572416 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:47.616622 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:48.045426 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:48.116767 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:48.573214 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:48.616845 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:49.044221 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:49.117209 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:49.543831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:49.615831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:50.044752 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:50.117047 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:50.572160 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:50.617157 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:51.045029 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:51.116892 1588554 kapi.go:107] duration metric: took 14.004458573s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:36:51.571831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:52.044681 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:52.544488 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:53.071964 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:53.544286 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:54.044362 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:54.572181 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:55.073837 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:55.544285 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:56.044544 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:56.545079 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:57.044265 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:57.544710 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:58.074493 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:58.544754 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:59.044416 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:59.545731 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:00.044364 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:00.545006 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:01.043696 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:01.544143 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:02.044850 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:02.544007 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:03.073713 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:03.544432 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:04.044116 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:04.544249 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:05.084663 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:05.545630 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:06.073711 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:06.545674 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:07.074336 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:07.573379 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:08.072260 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:08.573326 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:09.046665 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:09.572302 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:10.044323 1588554 kapi.go:107] duration metric: took 30.504755495s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:42:44.467839 1588554 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0923 10:42:44.467877 1588554 kapi.go:107] duration metric: took 6m0.001323817s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0923 10:42:44.467989 1588554 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0923 10:42:44.469896 1588554 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver
	I0923 10:42:44.471562 1588554 addons.go:510] duration metric: took 6m8.126806783s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver]
	I0923 10:42:44.471618 1588554 start.go:246] waiting for cluster config update ...
	I0923 10:42:44.471643 1588554 start.go:255] writing updated cluster config ...
	I0923 10:42:44.471977 1588554 exec_runner.go:51] Run: rm -f paused
	I0923 10:42:44.523125 1588554 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:42:44.524945 1588554 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 10:49:46 UTC. --
	Sep 23 10:38:43 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:38:43.578838348Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:38:43 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:38:43.580779747Z" level=error msg="Error running exec 8838e2670a88a9bf36c5939c4d717e9cf4ecb3a5e2ba01162dc7e81ca0b809a3 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=c00a6f27e28707c6 traceID=910e559f8e6555c896c8cf8584eb4b08
	Sep 23 10:38:43 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:38:43.785563896Z" level=info msg="ignoring event" container=8e764833448cda7cbb8e58d0d13c9d15d232a35640c17dbb5b5801b6f530938a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:56 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:39:56Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 10:39:59 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:39:59Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:40:02 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:40:02Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:40:14 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:40:14Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:40:15 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:40:15.705531535Z" level=info msg="ignoring event" container=d89ac4009f96a5930175fc54a170f24a7d2ebb3f21412ffe06746a8a75281462 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:42:45 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:45Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:42:46 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:46Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:42:50 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:50Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 10:42:59 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:59Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:43:00 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:43:00.881341653Z" level=info msg="ignoring event" container=479fe5cc32913c30ee1f61f86ce466c10554b176126704459014bdbdced160af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:47:49 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:47:49Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:47:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:47:52Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 10:47:53 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:47:53Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:48:04 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:48:04Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.540680915Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.540684219Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.542670843Z" level=error msg="Error running exec 5fd2d79e980950ca565c3a912c8440ea08719c5a16c1780c5869c00f977ccd0f in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=5608c228de976ea9 traceID=04969482329070952bf3db909444f8ca
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.744401240Z" level=info msg="ignoring event" container=3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.739922067Z" level=info msg="ignoring event" container=cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.812004030Z" level=info msg="ignoring event" container=9740e1ab45dffcba4eaa96160ed6e0a5385ee27e147bb376ac61e7e743929bfd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.882744210Z" level=info msg="ignoring event" container=b877c8259724a59128251b16cfbdf29c388b2ab853f4a4a08190f60af4e3434d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.988558076Z" level=info msg="ignoring event" container=d6ea241113e500cf3b405d989c416e01c0bc41267ce5bffed361a01c11edbd21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	3827f0f3d5112       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            About a minute ago   Exited              gadget                                   7                   f44622d46ba2f       gadget-cc7cr
	1c0aec03476e1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          12 minutes ago       Running             csi-snapshotter                          0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	f22e4f1571647       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          12 minutes ago       Running             csi-provisioner                          0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	b43acbe9c46ae       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            12 minutes ago       Running             liveness-probe                           0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	80af8a926afc3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           12 minutes ago       Running             hostpath                                 0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	6f57e7ad00a9e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                12 minutes ago       Running             node-driver-registrar                    0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	369c356333963       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              12 minutes ago       Running             csi-resizer                              0                   83f21cc9148ed       csi-hostpath-resizer-0
	764a5f36015a2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   12 minutes ago       Running             csi-external-health-monitor-controller   0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	5e03ecec68932       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             12 minutes ago       Running             csi-attacher                             0                   04bee9af65b88       csi-hostpath-attacher-0
	2a9c9054db024       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago       Running             volume-snapshot-controller               0                   954881763f4d2       snapshot-controller-56fcc65765-xp8jb
	5189bf51dfe60       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago       Running             volume-snapshot-controller               0                   3a5a27bdb1e27       snapshot-controller-56fcc65765-ncqwr
	100fd02a1faf5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        12 minutes ago       Running             yakd                                     0                   aad214bb107e1       yakd-dashboard-67d98fc6b-j4j2x
	7df30468750a3       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago       Running             metrics-server                           0                   26d7d65f4a110       metrics-server-84c5f94fbc-l8xpt
	e6929e7afa035       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               12 minutes ago       Running             cloud-spanner-emulator                   0                   45d7b20be1819       cloud-spanner-emulator-5b584cc74-97lv7
	88b34955ceb18       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago       Running             local-path-provisioner                   0                   34f59459d9996       local-path-provisioner-86d989889c-r6cj8
	cc089ff435908       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             12 minutes ago       Exited              registry                                 0                   b877c8259724a       registry-66c9cd494c-xghlh
	9740e1ab45dff       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              12 minutes ago       Exited              registry-proxy                           0                   d6ea241113e50       registry-proxy-j2dg7
	71c8aef5c5c24       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     13 minutes ago       Running             nvidia-device-plugin-ctr                 0                   2b86e9d29eb33       nvidia-device-plugin-daemonset-rmgc2
	c98c33bab4e43       c69fa2e9cbf5f                                                                                                                                13 minutes ago       Running             coredns                                  0                   f681430aabf24       coredns-7c65d6cfc9-p5xcl
	045fad5ce6ab4       60c005f310ff3                                                                                                                                13 minutes ago       Running             kube-proxy                               0                   6e8a6bce97790       kube-proxy-275md
	a88800a1ce5b9       6e38f40d628db                                                                                                                                13 minutes ago       Running             storage-provisioner                      0                   e04842fad72fa       storage-provisioner
	e008cb9d44fcb       175ffd71cce3d                                                                                                                                13 minutes ago       Running             kube-controller-manager                  0                   2f63f87bd15d1       kube-controller-manager-ubuntu-20-agent-12
	cefe11af8e634       9aa1fad941575                                                                                                                                13 minutes ago       Running             kube-scheduler                           0                   3f8185d06efd3       kube-scheduler-ubuntu-20-agent-12
	98649c04ed191       6bab7719df100                                                                                                                                13 minutes ago       Running             kube-apiserver                           0                   60b7c561b6237       kube-apiserver-ubuntu-20-agent-12
	891452784bf9b       2e96e5913fc06                                                                                                                                13 minutes ago       Running             etcd                                     0                   087dc8c7c97f8       etcd-ubuntu-20-agent-12
	
	
	==> coredns [c98c33bab4e4] <==
	[INFO] 10.244.0.5:39130 - 49408 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011371s
	[INFO] 10.244.0.5:36683 - 40984 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092092s
	[INFO] 10.244.0.5:36683 - 54814 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000177141s
	[INFO] 10.244.0.5:48486 - 28442 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000086929s
	[INFO] 10.244.0.5:48486 - 5406 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000127637s
	[INFO] 10.244.0.5:59402 - 60382 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000079785s
	[INFO] 10.244.0.5:59402 - 6106 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000100251s
	[INFO] 10.244.0.5:56367 - 45414 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00007586s
	[INFO] 10.244.0.5:56367 - 44632 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000107663s
	[INFO] 10.244.0.5:56779 - 21145 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071153s
	[INFO] 10.244.0.5:56779 - 17307 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139638s
	[INFO] 10.244.0.5:50701 - 22008 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00010586s
	[INFO] 10.244.0.5:50701 - 60925 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136235s
	[INFO] 10.244.0.5:34160 - 49361 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079304s
	[INFO] 10.244.0.5:34160 - 47831 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000185735s
	[INFO] 10.244.0.5:46275 - 16771 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008177s
	[INFO] 10.244.0.5:46275 - 49536 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108335s
	[INFO] 10.244.0.5:47968 - 20526 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.00008698s
	[INFO] 10.244.0.5:47968 - 10797 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000120657s
	[INFO] 10.244.0.5:37248 - 56533 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000080178s
	[INFO] 10.244.0.5:37248 - 45520 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000103163s
	[INFO] 10.244.0.5:39385 - 32664 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000082135s
	[INFO] 10.244.0.5:39385 - 56732 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000177006s
	[INFO] 10.244.0.5:37963 - 19331 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068935s
	[INFO] 10.244.0.5:37963 - 62598 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104055s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-12
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-12
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_36_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-12
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-12"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:36:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-12
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:49:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:47:46 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:47:46 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:47:46 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:47:46 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.128.15.239
	  Hostname:    ubuntu-20-agent-12
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                26e2d22b-def2-c216-b2a9-007020fa8ce7
	  Boot ID:                    83656df0-482a-417d-b7fc-90bc5fb37652
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-97lv7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gadget                      gadget-cc7cr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-p5xcl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-2flxk                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ubuntu-20-agent-12                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kube-apiserver-ubuntu-20-agent-12             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-12    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-275md                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ubuntu-20-agent-12             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-l8xpt               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         13m
	  kube-system                 nvidia-device-plugin-daemonset-rmgc2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-ncqwr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-xp8jb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-86d989889c-r6cj8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  volcano-system              volcano-admission-7f54bd7598-rfghv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  volcano-system              volcano-admission-init-gh7z4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  volcano-system              volcano-controllers-5ff7c5d4db-529t5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  volcano-system              volcano-scheduler-79dc4b78bb-zdd4g            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-j4j2x                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 13m   kube-proxy       
	  Normal   Starting                 13m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m   node-controller  Node ubuntu-20-agent-12 event: Registered Node ubuntu-20-agent-12 in Controller
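
The node description above reports the node Ready with only 850m of 8 CPUs requested, so the Pending volcano-scheduler pod is not resource-starved; the blockage must be elsewhere (the apiserver log further down points at an unreachable admission webhook). A quick sanity check on that utilization figure, with the values copied from the Allocated resources table above:

```shell
# CPU requested vs allocatable, from the node description above:
# 850m requested out of 8 CPUs (8000m) allocatable -> ~10%, matching the
# report, so the Pending volcano pods are not blocked on CPU.
requested_m=850
allocatable_m=$((8 * 1000))
echo "$(( 100 * requested_m / allocatable_m ))%"   # -> 10%
```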
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 28 f8 d2 0a cd 08 06
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7e b8 fc 4c f3 9c 08 06
	[Sep23 10:36] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 6e 58 88 a9 4c 08 06
	[ +10.128758] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 aa 9b fb 38 08 06
	[  +0.000410] IPv4: martian source 10.244.0.5 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 6e 58 88 a9 4c 08 06
	[  +2.001125] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 02 27 ad 4b 0d 08 06
	[  +0.032532] IPv4: martian source 10.244.0.5 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e ed 25 59 75 f3 08 06
	[  +3.912883] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 62 ba d6 13 c3 e3 08 06
	[  +2.709643] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 66 31 90 37 c7 08 06
	[  +0.019221] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 1d 22 9e 8e 47 08 06
	[  +9.151781] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 ca ad 28 d8 56 08 06
	[  +0.348439] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 59 84 5e b0 7b 08 06
	[  +0.569834] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e c1 ff 28 29 42 08 06
	
	
	==> etcd [891452784bf9] <==
	{"level":"info","ts":"2024-09-23T10:36:28.599143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:28.599153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac received MsgVoteResp from dd041fa4dc6d4aac at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:28.599207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac became leader at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:28.599225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dd041fa4dc6d4aac elected leader dd041fa4dc6d4aac at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:28.600162Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.600816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:36:28.600810Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dd041fa4dc6d4aac","local-member-attributes":"{Name:ubuntu-20-agent-12 ClientURLs:[https://10.128.15.239:2379]}","request-path":"/0/members/dd041fa4dc6d4aac/attributes","cluster-id":"c05a044d5786a1e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:36:28.600843Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:36:28.600903Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c05a044d5786a1e7","local-member-id":"dd041fa4dc6d4aac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.600975Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.601004Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.601085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:36:28.601103Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:36:28.601891Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:36:28.602013Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:36:28.602702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.239:2379"}
	{"level":"info","ts":"2024-09-23T10:36:28.603219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:36:44.242056Z","caller":"traceutil/trace.go:171","msg":"trace[1467056625] linearizableReadLoop","detail":"{readStateIndex:849; appliedIndex:845; }","duration":"128.026224ms","start":"2024-09-23T10:36:44.114013Z","end":"2024-09-23T10:36:44.242039Z","steps":["trace[1467056625] 'read index received'  (duration: 46.430648ms)","trace[1467056625] 'applied index is now lower than readState.Index'  (duration: 81.594963ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:36:44.242093Z","caller":"traceutil/trace.go:171","msg":"trace[2126161537] transaction","detail":"{read_only:false; response_revision:831; number_of_response:1; }","duration":"134.824059ms","start":"2024-09-23T10:36:44.107242Z","end":"2024-09-23T10:36:44.242066Z","steps":["trace[2126161537] 'process raft request'  (duration: 123.210784ms)","trace[2126161537] 'compare'  (duration: 11.439426ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:36:44.242290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.188403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:36:44.242444Z","caller":"traceutil/trace.go:171","msg":"trace[1472265816] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:832; }","duration":"128.418389ms","start":"2024-09-23T10:36:44.114009Z","end":"2024-09-23T10:36:44.242428Z","steps":["trace[1472265816] 'agreement among raft nodes before linearized reading'  (duration: 128.138624ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:36:44.242340Z","caller":"traceutil/trace.go:171","msg":"trace[1535126050] transaction","detail":"{read_only:false; response_revision:832; number_of_response:1; }","duration":"133.407624ms","start":"2024-09-23T10:36:44.108904Z","end":"2024-09-23T10:36:44.242312Z","steps":["trace[1535126050] 'process raft request'  (duration: 133.085569ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:46:28.621172Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1493}
	{"level":"info","ts":"2024-09-23T10:46:28.644160Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1493,"took":"22.540162ms","hash":974073395,"current-db-size-bytes":7499776,"current-db-size":"7.5 MB","current-db-size-in-use-bytes":3624960,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T10:46:28.644213Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":974073395,"revision":1493,"compact-revision":-1}
	
	
	==> kernel <==
	 10:49:46 up 1 day, 16:32,  0 users,  load average: 0.06, 0.22, 0.73
	Linux ubuntu-20-agent-12 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [98649c04ed19] <==
	W0923 10:46:47.581914       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:46:47.581915       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:47:39.908318       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:47:39.908367       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:47:39.910002       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:47:47.588246       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	W0923 10:47:47.588273       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:47:47.588299       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	E0923 10:47:47.588306       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:47:47.589909       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:47:47.589914       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:48:46.234430       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:48:46.234472       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate-sa.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:48:47.598478       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:48:47.598519       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:48:47.598479       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:48:47.598563       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:48:47.600240       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:48:47.600241       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:49:33.957840       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:49:33.957888       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:49:33.959569       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:49:45.354026       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:49:45.354068       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:49:45.355727       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
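
The apiserver log above shows two unreachable admission webhooks with different failure policies: `gcp-auth-mutate.k8s.io` fails open (requests proceed), while `mutatepod.volcano.sh` fails closed, so every pod create in its scope is rejected — which would explain the volcano pods never starting. One rough way to tally the two failure modes from a saved copy of such a log (the `/tmp/apiserver.log` path and the excerpt below are illustrative, not taken from the test run):

```shell
# Minimal excerpt standing in for a saved apiserver log (illustrative):
cat <<'EOF' > /tmp/apiserver.log
W0923 10:49:33.959569 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: ...
W0923 10:49:45.354026 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: ...
W0923 10:49:45.355727 1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: ...
EOF
# Count failures per webhook, split by fail-open vs fail-closed policy:
grep -oE 'failing (open|closed) [a-z0-9.-]+' /tmp/apiserver.log | sort | uniq -c
```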
	
	
	==> kube-controller-manager [e008cb9d44fc] <==
	E0923 10:45:47.572672       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:45:47.573917       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:45:47.573940       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:46:47.582512       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:46:47.582520       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:46:47.583692       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:46:47.583700       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	I0923 10:47:39.910608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="7.170343ms"
	E0923 10:47:39.910642       1 replica_set.go:560] "Unhandled Error" err="sync \"gcp-auth/gcp-auth-89d5ffd79\" failed with Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	I0923 10:47:46.737853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-12"
	E0923 10:47:47.590521       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:47:47.590574       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:47:47.591719       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:47:47.591731       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	I0923 10:48:03.173184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="64.04µs"
	I0923 10:48:04.173914       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="73.789µs"
	I0923 10:48:07.171864       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	I0923 10:48:16.172963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="63.638µs"
	I0923 10:48:19.171385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="65.324µs"
	I0923 10:48:22.173858       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	E0923 10:48:47.600991       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:48:47.601067       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:48:47.602321       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:48:47.602356       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	I0923 10:49:45.698924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="11.377µs"
	
	
	==> kube-proxy [045fad5ce6ab] <==
	I0923 10:36:38.573406       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:36:38.729619       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.128.15.239"]
	E0923 10:36:38.729768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:36:38.818441       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:36:38.818516       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:36:38.825889       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:36:38.826286       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:36:38.826330       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:36:38.829447       1 config.go:328] "Starting node config controller"
	I0923 10:36:38.829476       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:36:38.830499       1 config.go:199] "Starting service config controller"
	I0923 10:36:38.830549       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:36:38.830606       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:36:38.830612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:36:38.931771       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:36:38.931860       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:36:38.938436       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cefe11af8e63] <==
	W0923 10:36:30.422004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:36:30.422053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.448133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:36:30.448193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.597590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:36:30.597642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.627316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.627362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.638928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 10:36:30.638980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.639681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:36:30.639714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.656288       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:36:30.656331       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:36:30.673851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.673901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.732651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.732705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.750217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:36:30.750269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.788871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.788927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.793547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:36:30.793590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:36:32.724371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 10:49:46 UTC. --
	Sep 23 10:48:58 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:58.163991 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:48:58 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:48:58.164062 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:49:06 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:06.162634 1590014 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-p5xcl" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 10:49:09 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:09.161923 1590014 scope.go:117] "RemoveContainer" containerID="3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165"
	Sep 23 10:49:09 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:09.162142 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-cc7cr_gadget(25f9725e-0663-4ecf-bd22-662c6d69802a)\"" pod="gadget/gadget-cc7cr" podUID="25f9725e-0663-4ecf-bd22-662c6d69802a"
	Sep 23 10:49:09 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:09.164247 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:49:10 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:10.164196 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:49:12 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:12.164162 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:49:21 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:21.163956 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:49:23 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:23.164052 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:49:24 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:24.162033 1590014 scope.go:117] "RemoveContainer" containerID="3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165"
	Sep 23 10:49:24 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:24.162242 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-cc7cr_gadget(25f9725e-0663-4ecf-bd22-662c6d69802a)\"" pod="gadget/gadget-cc7cr" podUID="25f9725e-0663-4ecf-bd22-662c6d69802a"
	Sep 23 10:49:25 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:25.164175 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:49:34 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:34.164333 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:49:36 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:36.164638 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:49:37 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:37.162481 1590014 scope.go:117] "RemoveContainer" containerID="3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165"
	Sep 23 10:49:37 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:37.162701 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-cc7cr_gadget(25f9725e-0663-4ecf-bd22-662c6d69802a)\"" pod="gadget/gadget-cc7cr" podUID="25f9725e-0663-4ecf-bd22-662c6d69802a"
	Sep 23 10:49:40 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:49:40.164149 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:49:46 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:46.092152 1590014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtgsx\" (UniqueName: \"kubernetes.io/projected/3805a0ce-c102-4a58-92fb-1845d803f30a-kube-api-access-mtgsx\") pod \"3805a0ce-c102-4a58-92fb-1845d803f30a\" (UID: \"3805a0ce-c102-4a58-92fb-1845d803f30a\") "
	Sep 23 10:49:46 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:46.094612 1590014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3805a0ce-c102-4a58-92fb-1845d803f30a-kube-api-access-mtgsx" (OuterVolumeSpecName: "kube-api-access-mtgsx") pod "3805a0ce-c102-4a58-92fb-1845d803f30a" (UID: "3805a0ce-c102-4a58-92fb-1845d803f30a"). InnerVolumeSpecName "kube-api-access-mtgsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:49:46 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:46.192805 1590014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqb4n\" (UniqueName: \"kubernetes.io/projected/04db77a5-6d0f-40b1-b220-f94a39762520-kube-api-access-nqb4n\") pod \"04db77a5-6d0f-40b1-b220-f94a39762520\" (UID: \"04db77a5-6d0f-40b1-b220-f94a39762520\") "
	Sep 23 10:49:46 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:46.192953 1590014 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mtgsx\" (UniqueName: \"kubernetes.io/projected/3805a0ce-c102-4a58-92fb-1845d803f30a-kube-api-access-mtgsx\") on node \"ubuntu-20-agent-12\" DevicePath \"\""
	Sep 23 10:49:46 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:46.194917 1590014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04db77a5-6d0f-40b1-b220-f94a39762520-kube-api-access-nqb4n" (OuterVolumeSpecName: "kube-api-access-nqb4n") pod "04db77a5-6d0f-40b1-b220-f94a39762520" (UID: "04db77a5-6d0f-40b1-b220-f94a39762520"). InnerVolumeSpecName "kube-api-access-nqb4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:49:46 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:46.294228 1590014 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nqb4n\" (UniqueName: \"kubernetes.io/projected/04db77a5-6d0f-40b1-b220-f94a39762520-kube-api-access-nqb4n\") on node \"ubuntu-20-agent-12\" DevicePath \"\""
	Sep 23 10:49:46 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:49:46.685591 1590014 scope.go:117] "RemoveContainer" containerID="cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77"
	
	
	==> storage-provisioner [a88800a1ce5b] <==
	I0923 10:36:38.418197       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:36:38.433696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:36:38.433749       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:36:38.445674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:36:38.446763       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0!
	I0923 10:36:38.449267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35a6bb7a-1e48-4bf9-816a-2d141c61bd81", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0 became leader
	I0923 10:36:38.547698       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0!
	

                                                
                                                
-- /stdout --
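The kubelet entries in the log above are backing off on pulls for `docker.io/docker.io/volcanosh/vc-scheduler:...` and the other Volcano images — note the doubled `docker.io/` registry prefix, which suggests a default registry was prepended to an already fully-qualified image reference. A minimal shell sketch of the normalization (illustrative only, not minikube's actual code):

```shell
# Illustrative: strip the duplicated registry prefix seen in the
# ImagePullBackOff messages above. "${img#docker.io/}" removes the
# shortest leading "docker.io/" once, leaving a valid reference.
img='docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0'
echo "${img#docker.io/}"   # docker.io/volcanosh/vc-scheduler:v1.10.0
```

A reference with a doubled registry host resolves to a nonexistent repository path, so the pull can never succeed no matter how long the back-off retries.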
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g: exit status 1 (67.982181ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "volcano-admission-7f54bd7598-rfghv" not found
	Error from server (NotFound): pods "volcano-admission-init-gh7z4" not found
	Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-529t5" not found
	Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-zdd4g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g: exit status 1
--- FAIL: TestAddons/parallel/Registry (11.89s)

                                                
                                    
TestAddons/parallel/CSI (371.71s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
I0923 10:50:03.020680 1584534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 10:50:03.025048 1584534 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:50:03.025073 1584534 kapi.go:107] duration metric: took 4.409809ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.418351ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:518: (dbg) Non-zero exit: kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml: exit status 1 (114.55154ms)

                                                
                                                
** stderr ** 
	Error from server (InternalError): error when creating "testdata/csi-hostpath-driver/pv-pod.yaml": Internal error occurred: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused

                                                
                                                
** /stderr **
addons_test.go:520: creating pod with kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml failed: exit status 1
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:523: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:523: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:523: TestAddons/parallel/CSI: showing logs for failed pods as of 2024-09-23 10:56:13.490938633 +0000 UTC m=+1265.872511554
addons_test.go:524: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
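The stderr block above suggests the CSI failure is downstream of the Volcano failure: the admission pods never started (ImagePullBackOff), so the `mutatepod.volcano.sh` webhook has no live backend and every pod creation is rejected with `connection refused`. As a sketch, the unreachable webhook target can be read straight out of the recorded error text (parsing the message above, nothing cluster-side assumed):

```shell
# Illustrative: extract the failing webhook service host from the
# admission error captured in the stderr block above.
err='Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused'
printf '%s\n' "$err" | sed -n 's#.*https://\([^/:]*\):443.*#\1#p'
# volcano-admission-service.volcano-system.svc
```

With the host in hand, one would typically check whether the service has endpoints (e.g. `kubectl -n volcano-system get endpoints volcano-admission-service`); here it cannot, since the admission deployment's image never pulled.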
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:42273               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:36 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:42 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:49 UTC | 23 Sep 24 10:49 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:49 UTC | 23 Sep 24 10:49 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:49 UTC | 23 Sep 24 10:49 UTC |
	|         | minikube                             |          |         |         |                     |                     |
	| addons  | minikube addons                      | minikube | jenkins | v1.34.0 | 23 Sep 24 10:50 UTC | 23 Sep 24 10:50 UTC |
	|         | disable metrics-server               |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:36:19
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:36:19.158069 1588554 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:36:19.158231 1588554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:36:19.158241 1588554 out.go:358] Setting ErrFile to fd 2...
	I0923 10:36:19.158245 1588554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:36:19.158464 1588554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-1577701/.minikube/bin
	I0923 10:36:19.159125 1588554 out.go:352] Setting JSON to false
	I0923 10:36:19.160039 1588554 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":145130,"bootTime":1726942649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:36:19.160160 1588554 start.go:139] virtualization: kvm guest
	I0923 10:36:19.162394 1588554 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:36:19.163650 1588554 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:36:19.163676 1588554 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 10:36:19.163732 1588554 notify.go:220] Checking for updates...
	I0923 10:36:19.166389 1588554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:36:19.167804 1588554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:36:19.169081 1588554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	I0923 10:36:19.170968 1588554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:36:19.172507 1588554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:36:19.174424 1588554 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:36:19.185459 1588554 out.go:177] * Using the none driver based on user configuration
	I0923 10:36:19.186681 1588554 start.go:297] selected driver: none
	I0923 10:36:19.186694 1588554 start.go:901] validating driver "none" against <nil>
	I0923 10:36:19.186706 1588554 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:36:19.186759 1588554 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 10:36:19.187052 1588554 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0923 10:36:19.187561 1588554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:36:19.187804 1588554 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:36:19.187836 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:19.187883 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:19.187891 1588554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:36:19.187950 1588554 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:36:19.190491 1588554 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0923 10:36:19.192247 1588554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json ...
	I0923 10:36:19.192296 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json: {Name:mk0db601d978f1f6b111e723fd0658218dee1a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:19.192505 1588554 start.go:360] acquireMachinesLock for minikube: {Name:mka47a0638fa8ca4d22f1fa46c51878d308fb6cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:36:19.192555 1588554 start.go:364] duration metric: took 26.854µs to acquireMachinesLock for "minikube"
	I0923 10:36:19.192576 1588554 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:36:19.192689 1588554 start.go:125] createHost starting for "" (driver="none")
	I0923 10:36:19.194985 1588554 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0923 10:36:19.196198 1588554 exec_runner.go:51] Run: systemctl --version
	I0923 10:36:19.198807 1588554 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0923 10:36:19.198844 1588554 client.go:168] LocalClient.Create starting
	I0923 10:36:19.198929 1588554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca.pem
	I0923 10:36:19.198967 1588554 main.go:141] libmachine: Decoding PEM data...
	I0923 10:36:19.198986 1588554 main.go:141] libmachine: Parsing certificate...
	I0923 10:36:19.199033 1588554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/cert.pem
	I0923 10:36:19.199052 1588554 main.go:141] libmachine: Decoding PEM data...
	I0923 10:36:19.199065 1588554 main.go:141] libmachine: Parsing certificate...
	I0923 10:36:19.199430 1588554 client.go:171] duration metric: took 577.868µs to LocalClient.Create
	I0923 10:36:19.199455 1588554 start.go:167] duration metric: took 651.01µs to libmachine.API.Create "minikube"
	I0923 10:36:19.199461 1588554 start.go:293] postStartSetup for "minikube" (driver="none")
	I0923 10:36:19.199503 1588554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:36:19.199539 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:36:19.209126 1588554 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:36:19.209149 1588554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:36:19.209157 1588554 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:36:19.210966 1588554 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0923 10:36:19.212083 1588554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-1577701/.minikube/addons for local assets ...
	I0923 10:36:19.212135 1588554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-1577701/.minikube/files for local assets ...
	I0923 10:36:19.212155 1588554 start.go:296] duration metric: took 12.687054ms for postStartSetup
	I0923 10:36:19.212795 1588554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json ...
	I0923 10:36:19.212933 1588554 start.go:128] duration metric: took 20.232501ms to createHost
	I0923 10:36:19.212946 1588554 start.go:83] releasing machines lock for "minikube", held for 20.378727ms
	I0923 10:36:19.213290 1588554 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:36:19.213405 1588554 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0923 10:36:19.215275 1588554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:36:19.215410 1588554 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:36:19.225131 1588554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 10:36:19.225172 1588554 start.go:495] detecting cgroup driver to use...
	I0923 10:36:19.225207 1588554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:36:19.225324 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:36:19.246269 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:36:19.256037 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:36:19.265994 1588554 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:36:19.266081 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:36:19.276368 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:36:19.286490 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:36:19.297389 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:36:19.307066 1588554 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:36:19.316656 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:36:19.326288 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:36:19.336363 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 10:36:19.346290 1588554 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:36:19.355338 1588554 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:36:19.364071 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:19.577952 1588554 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0923 10:36:19.651036 1588554 start.go:495] detecting cgroup driver to use...
	I0923 10:36:19.651102 1588554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:36:19.651252 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:36:19.672247 1588554 exec_runner.go:51] Run: which cri-dockerd
	I0923 10:36:19.673216 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 10:36:19.681044 1588554 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0923 10:36:19.681067 1588554 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.681103 1588554 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.689425 1588554 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 10:36:19.689591 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4059772120 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.698668 1588554 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0923 10:36:19.932327 1588554 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0923 10:36:20.150083 1588554 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 10:36:20.150282 1588554 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0923 10:36:20.150300 1588554 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0923 10:36:20.150338 1588554 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0923 10:36:20.158569 1588554 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0923 10:36:20.158734 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2996454661 /etc/docker/daemon.json
	I0923 10:36:20.168354 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:20.379218 1588554 exec_runner.go:51] Run: sudo systemctl restart docker
	I0923 10:36:20.693080 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 10:36:20.705085 1588554 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0923 10:36:20.723552 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:36:20.735597 1588554 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0923 10:36:20.953725 1588554 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0923 10:36:21.177941 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:21.410173 1588554 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0923 10:36:21.423706 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:36:21.435794 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:21.688698 1588554 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0923 10:36:21.764452 1588554 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 10:36:21.764538 1588554 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0923 10:36:21.765977 1588554 start.go:563] Will wait 60s for crictl version
	I0923 10:36:21.766041 1588554 exec_runner.go:51] Run: which crictl
	I0923 10:36:21.767183 1588554 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0923 10:36:21.799990 1588554 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0923 10:36:21.800066 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:21.821449 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:21.845424 1588554 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0923 10:36:21.845506 1588554 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0923 10:36:21.848567 1588554 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0923 10:36:21.850015 1588554 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:36:21.850144 1588554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:36:21.850155 1588554 kubeadm.go:934] updating node { 10.128.15.239 8443 v1.31.1 docker true true} ...
	I0923 10:36:21.850253 1588554 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-12 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.239 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0923 10:36:21.850310 1588554 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0923 10:36:21.901691 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:21.901719 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:21.901730 1588554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:36:21.901755 1588554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.239 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-12 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:36:21.901910 1588554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.128.15.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-12"
	  kubeletExtraArgs:
	    node-ip: 10.128.15.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.128.15.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:36:21.901970 1588554 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:36:21.910706 1588554 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:36:21.910760 1588554 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:36:21.918867 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:36:21.918878 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 10:36:21.918874 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 10:36:21.918927 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:36:21.918927 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:36:21.919007 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:36:21.931740 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 10:36:21.973404 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2218285672 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:36:21.975632 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube621796612 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:36:22.005095 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3553074774 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:36:22.078082 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:36:22.087582 1588554 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0923 10:36:22.087606 1588554 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.087647 1588554 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.095444 1588554 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0923 10:36:22.095602 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4110124182 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.105645 1588554 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0923 10:36:22.105666 1588554 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0923 10:36:22.105700 1588554 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0923 10:36:22.113822 1588554 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:36:22.114022 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3324119727 /lib/systemd/system/kubelet.service
	I0923 10:36:22.123427 1588554 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0923 10:36:22.123598 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3318915681 /var/tmp/minikube/kubeadm.yaml.new
	I0923 10:36:22.131907 1588554 exec_runner.go:51] Run: grep 10.128.15.239	control-plane.minikube.internal$ /etc/hosts
	I0923 10:36:22.133649 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:22.363463 1588554 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:36:22.378439 1588554 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube for IP: 10.128.15.239
	I0923 10:36:22.378459 1588554 certs.go:194] generating shared ca certs ...
	I0923 10:36:22.378479 1588554 certs.go:226] acquiring lock for ca certs: {Name:mk757d3be8cf2fb32b8856d4b5e3173183901a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.378637 1588554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.key
	I0923 10:36:22.378678 1588554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.key
	I0923 10:36:22.378687 1588554 certs.go:256] generating profile certs ...
	I0923 10:36:22.378744 1588554 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key
	I0923 10:36:22.378763 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt with IP's: []
	I0923 10:36:22.592011 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt ...
	I0923 10:36:22.592085 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt: {Name:mk1bdb710d99b77b32099c81dc261479f881a61c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.592249 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key ...
	I0923 10:36:22.592262 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key: {Name:mk990e2a3a19cc03d4722edbfa635f5e467b2b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.592353 1588554 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83
	I0923 10:36:22.592371 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.239]
	I0923 10:36:22.826429 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 ...
	I0923 10:36:22.826468 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83: {Name:mkdaa76b99a75fc999a744f15c5aa0e73646ad27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.826632 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83 ...
	I0923 10:36:22.826650 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83: {Name:mk5c84f7ccec239df3b3f71560e288a437b89d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.826728 1588554 certs.go:381] copying /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 -> /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt
	I0923 10:36:22.826837 1588554 certs.go:385] copying /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83 -> /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key
	I0923 10:36:22.826896 1588554 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key
	I0923 10:36:22.826913 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0923 10:36:22.988376 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt ...
	I0923 10:36:22.988415 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt: {Name:mk1a79d5dbe06be337e3230425d1c5cb0b5c9c8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.988572 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key ...
	I0923 10:36:22.988587 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key: {Name:mk7f2be748011aa06064cd625f3afbd5fec49aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.988800 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:36:22.988842 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:36:22.988874 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:36:22.988896 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/key.pem (1675 bytes)
	I0923 10:36:22.989638 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:36:22.989763 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube32048499 /var/lib/minikube/certs/ca.crt
	I0923 10:36:22.999482 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 10:36:22.999627 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2462737595 /var/lib/minikube/certs/ca.key
	I0923 10:36:23.008271 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:36:23.008403 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2315409218 /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:36:23.016619 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:36:23.016796 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2778680620 /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:36:23.026283 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0923 10:36:23.026429 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2563673913 /var/lib/minikube/certs/apiserver.crt
	I0923 10:36:23.034367 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:36:23.034559 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1327376112 /var/lib/minikube/certs/apiserver.key
	I0923 10:36:23.043236 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:36:23.043385 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3861098534 /var/lib/minikube/certs/proxy-client.crt
	I0923 10:36:23.053261 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:36:23.053393 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1865989171 /var/lib/minikube/certs/proxy-client.key
	I0923 10:36:23.062749 1588554 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0923 10:36:23.062771 1588554 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.062810 1588554 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.070407 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:36:23.070572 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2921020744 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.078922 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:36:23.079082 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1931847277 /var/lib/minikube/kubeconfig
	I0923 10:36:23.087191 1588554 exec_runner.go:51] Run: openssl version
	I0923 10:36:23.090067 1588554 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:36:23.098811 1588554 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.100243 1588554 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 23 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.100280 1588554 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.103237 1588554 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:36:23.112696 1588554 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:36:23.113952 1588554 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:36:23.113993 1588554 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:36:23.114121 1588554 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 10:36:23.130863 1588554 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:36:23.141170 1588554 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:36:23.154896 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:23.177871 1588554 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:36:23.186183 1588554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:36:23.186207 1588554 kubeadm.go:157] found existing configuration files:
	
	I0923 10:36:23.186251 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:36:23.195211 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:36:23.195272 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:36:23.203608 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:36:23.212052 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:36:23.212118 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:36:23.220697 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:36:23.231762 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:36:23.231826 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:36:23.239886 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:36:23.250151 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:36:23.250215 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:36:23.257852 1588554 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:36:23.292982 1588554 kubeadm.go:310] W0923 10:36:23.292852 1589455 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:36:23.293485 1588554 kubeadm.go:310] W0923 10:36:23.293445 1589455 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:36:23.295381 1588554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:36:23.295429 1588554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:36:23.388509 1588554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:36:23.388613 1588554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:36:23.388622 1588554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:36:23.388626 1588554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:36:23.400110 1588554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:36:23.403660 1588554 out.go:235]   - Generating certificates and keys ...
	I0923 10:36:23.403706 1588554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:36:23.403719 1588554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:36:23.479635 1588554 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:36:23.612116 1588554 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:36:23.692069 1588554 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:36:23.926999 1588554 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:36:24.011480 1588554 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:36:24.011600 1588554 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 10:36:24.104614 1588554 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:36:24.104769 1588554 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 10:36:24.304540 1588554 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:36:24.538700 1588554 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:36:24.615897 1588554 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:36:24.616110 1588554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:36:24.791653 1588554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:36:24.910277 1588554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:36:25.215908 1588554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:36:25.289127 1588554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:36:25.490254 1588554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:36:25.490804 1588554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:36:25.493193 1588554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:36:25.495266 1588554 out.go:235]   - Booting up control plane ...
	I0923 10:36:25.495299 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:36:25.495318 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:36:25.495739 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:36:25.515279 1588554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:36:25.519949 1588554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:36:25.519979 1588554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:36:25.765044 1588554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:36:25.765080 1588554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:36:26.266756 1588554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.690653ms
	I0923 10:36:26.266797 1588554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:36:31.268595 1588554 kubeadm.go:310] [api-check] The API server is healthy after 5.001820679s
	I0923 10:36:31.279620 1588554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:36:31.290992 1588554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:36:31.308130 1588554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:36:31.308158 1588554 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-12 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:36:31.315634 1588554 kubeadm.go:310] [bootstrap-token] Using token: vj37sq.3v8d1kp1945z41wj
	I0923 10:36:31.316963 1588554 out.go:235]   - Configuring RBAC rules ...
	I0923 10:36:31.317008 1588554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:36:31.320391 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:36:31.328142 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:36:31.330741 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:36:31.333381 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:36:31.335890 1588554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:36:31.675856 1588554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:36:32.106847 1588554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:36:32.674219 1588554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:36:32.675126 1588554 kubeadm.go:310] 
	I0923 10:36:32.675137 1588554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:36:32.675141 1588554 kubeadm.go:310] 
	I0923 10:36:32.675148 1588554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:36:32.675152 1588554 kubeadm.go:310] 
	I0923 10:36:32.675156 1588554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:36:32.675160 1588554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:36:32.675164 1588554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:36:32.675171 1588554 kubeadm.go:310] 
	I0923 10:36:32.675175 1588554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:36:32.675179 1588554 kubeadm.go:310] 
	I0923 10:36:32.675184 1588554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:36:32.675188 1588554 kubeadm.go:310] 
	I0923 10:36:32.675192 1588554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:36:32.675196 1588554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:36:32.675207 1588554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:36:32.675211 1588554 kubeadm.go:310] 
	I0923 10:36:32.675217 1588554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:36:32.675221 1588554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:36:32.675225 1588554 kubeadm.go:310] 
	I0923 10:36:32.675228 1588554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vj37sq.3v8d1kp1945z41wj \
	I0923 10:36:32.675233 1588554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91a09f8ec29205faf582a48ccf10beda52dc431d394b0dc26a537d8edbd2b49c \
	I0923 10:36:32.675237 1588554 kubeadm.go:310] 	--control-plane 
	I0923 10:36:32.675242 1588554 kubeadm.go:310] 
	I0923 10:36:32.675246 1588554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:36:32.675252 1588554 kubeadm.go:310] 
	I0923 10:36:32.675255 1588554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vj37sq.3v8d1kp1945z41wj \
	I0923 10:36:32.675258 1588554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91a09f8ec29205faf582a48ccf10beda52dc431d394b0dc26a537d8edbd2b49c 
	I0923 10:36:32.679087 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:32.679120 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:32.680982 1588554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:36:32.682253 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:36:32.692879 1588554 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 10:36:32.693059 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3098274276 /etc/cni/net.d/1-k8s.conflist
	I0923 10:36:32.704393 1588554 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:36:32.704473 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:32.704510 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-12 minikube.k8s.io/updated_at=2024_09_23T10_36_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0923 10:36:32.713564 1588554 ops.go:34] apiserver oom_adj: -16
	I0923 10:36:32.777699 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:33.277929 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:33.778034 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:34.278552 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:34.777937 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:35.278677 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:35.777756 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:36.278547 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:36.343720 1588554 kubeadm.go:1113] duration metric: took 3.63930993s to wait for elevateKubeSystemPrivileges
	I0923 10:36:36.343761 1588554 kubeadm.go:394] duration metric: took 13.229771538s to StartCluster
	I0923 10:36:36.343783 1588554 settings.go:142] acquiring lock: {Name:mkf413d2c932a8f45f91708eee4886fc43a35e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:36.343846 1588554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:36:36.344451 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/kubeconfig: {Name:mk42cd91ee317759dd4ab26721004c644d4d46c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:36.344664 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:36:36.344755 1588554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:36:36.344891 1588554 addons.go:69] Setting yakd=true in profile "minikube"
	I0923 10:36:36.344910 1588554 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0923 10:36:36.344913 1588554 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0923 10:36:36.344939 1588554 addons.go:69] Setting registry=true in profile "minikube"
	I0923 10:36:36.344931 1588554 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0923 10:36:36.344946 1588554 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0923 10:36:36.344964 1588554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0923 10:36:36.344976 1588554 addons.go:234] Setting addon registry=true in "minikube"
	I0923 10:36:36.344980 1588554 mustload.go:65] Loading cluster: minikube
	I0923 10:36:36.344979 1588554 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0923 10:36:36.344992 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.344990 1588554 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:36:36.345000 1588554 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0923 10:36:36.345005 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345031 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345045 1588554 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0923 10:36:36.345072 1588554 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0923 10:36:36.345087 1588554 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0923 10:36:36.345088 1588554 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0923 10:36:36.345104 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345114 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345179 1588554 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:36:36.345317 1588554 addons.go:69] Setting volcano=true in profile "minikube"
	I0923 10:36:36.345335 1588554 addons.go:234] Setting addon volcano=true in "minikube"
	I0923 10:36:36.345361 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345658 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345675 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345680 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345690 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345717 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345758 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345762 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345775 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345780 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345807 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345824 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345827 1588554 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0923 10:36:36.345827 1588554 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0923 10:36:36.344919 1588554 addons.go:234] Setting addon yakd=true in "minikube"
	I0923 10:36:36.345839 1588554 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0923 10:36:36.345843 1588554 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0923 10:36:36.344930 1588554 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0923 10:36:36.345858 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345860 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345861 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345874 1588554 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0923 10:36:36.345918 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345811 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346177 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346191 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346221 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346328 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346342 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346371 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346524 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346536 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346550 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345861 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.346579 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346655 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346673 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346705 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345810 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345717 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346539 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.347192 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.347221 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.347233 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.347253 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.347284 1588554 out.go:177] * Configuring local host environment ...
	I0923 10:36:36.345829 1588554 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0923 10:36:36.347650 1588554 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0923 10:36:36.348407 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.348430 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.348463 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0923 10:36:36.348690 1588554 out.go:270] * 
	W0923 10:36:36.348780 1588554 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0923 10:36:36.348809 1588554 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0923 10:36:36.348865 1588554 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0923 10:36:36.348897 1588554 out.go:270] * 
	W0923 10:36:36.348999 1588554 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0923 10:36:36.349040 1588554 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0923 10:36:36.349080 1588554 out.go:270] * 
	W0923 10:36:36.349130 1588554 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0923 10:36:36.349173 1588554 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0923 10:36:36.349199 1588554 out.go:270] * 
	W0923 10:36:36.349236 1588554 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0923 10:36:36.349282 1588554 start.go:235] Will wait 6m0s for node &{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:36:36.345810 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.350050 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.350088 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.350710 1588554 out.go:177] * Verifying Kubernetes components...
	I0923 10:36:36.352239 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:36.369581 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.369720 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.370463 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.371382 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.373298 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.379392 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.383028 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.385097 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.385628 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.385693 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.389742 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.389782 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.389793 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402210 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402285 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402285 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402325 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402356 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402407 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402488 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402530 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402557 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402328 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.406952 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.406987 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.407339 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.407394 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.414599 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.414632 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.415393 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.415455 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.415667 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.415722 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.417736 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.417799 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.420551 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.420602 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.421969 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.421994 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.422984 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.423319 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.423344 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.424659 1588554 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:36:36.424874 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.424899 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.428268 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.428559 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.430071 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:36:36.430076 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:36:36.430207 1588554 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:36:36.431382 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:36:36.431427 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:36:36.431585 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube840197264 /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:36:36.431790 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:36:36.431815 1588554 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:36:36.431987 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1728725482 /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:36:36.433518 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:36:36.434702 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:36:36.435367 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.435397 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.436902 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:36:36.438150 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:36:36.439277 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.439337 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.440540 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.440996 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:36:36.442010 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:36:36.442071 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.442098 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.442561 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.442772 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.443079 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.443136 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.443350 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.443375 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.443492 1588554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443518 1588554 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0923 10:36:36.443525 1588554 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443566 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443844 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:36:36.443879 1588554 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:36:36.444008 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4228746672 /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:36:36.444580 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:36:36.446035 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:36:36.446930 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.446950 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.447416 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.448168 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.448190 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.448643 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.448661 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.449758 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0923 10:36:36.449765 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:36:36.449802 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:36:36.449942 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4141716628 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:36:36.452784 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.452686 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0923 10:36:36.454911 1588554 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:36:36.454973 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.455634 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.456554 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.457231 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:36:36.457268 1588554 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:36:36.457428 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1881343942 /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:36:36.458064 1588554 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.458100 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:36:36.458238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2629288326 /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.458427 1588554 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:36:36.458490 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0923 10:36:36.458554 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:36:36.458748 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.459583 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.459224 1588554 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0923 10:36:36.459875 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.459904 1588554 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.459934 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:36:36.460073 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1172599530 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.460516 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:36:36.460548 1588554 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:36:36.460695 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1059056177 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:36:36.462006 1588554 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.462043 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
	I0923 10:36:36.462614 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3721652212 /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.464913 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.464936 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.464972 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.467000 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.472480 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:36:36.473238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube726889991 /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.480760 1588554 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0923 10:36:36.480939 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.485106 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.485141 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.485190 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.487844 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:36:36.487878 1588554 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:36:36.488012 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3601575597 /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:36:36.489111 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.491189 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:36:36.491220 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:36:36.491369 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3194307137 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:36:36.492639 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.492667 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.494194 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.494218 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.494867 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.498982 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.499389 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.500765 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.500800 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:36:36.500956 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1985750997 /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.501929 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.503522 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.507731 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:36:36.507981 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:36:36.508221 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2102644874 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:36:36.508499 1588554 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:36:36.508667 1588554 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:36:36.509791 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:36:36.509885 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:36:36.510186 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:36:36.510211 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:36:36.510259 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:36:36.510276 1588554 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:36:36.510535 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2223790766 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:36:36.510687 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2284125594 /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:36:36.511165 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1172030255 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:36:36.518843 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.518932 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.519210 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:36:36.519243 1588554 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:36:36.519417 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2081246003 /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:36:36.527052 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.530307 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.531182 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.531199 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:36:36.531224 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:36:36.531366 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2359416048 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:36:36.534852 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.534897 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.534862 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:36:36.534931 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:36:36.534930 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:36:36.534953 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:36:36.535115 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube169766603 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:36:36.535148 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube873661914 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:36:36.540683 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.547811 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:36:36.548029 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:36:36.548063 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:36:36.548238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube411864712 /etc/kubernetes/addons/ig-role.yaml
	I0923 10:36:36.553057 1588554 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:36:36.555188 1588554 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:36:36.555273 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:36:36.555312 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:36:36.555486 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4206261347 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:36:36.562063 1588554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.562124 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:36:36.562318 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918834683 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.563155 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:36:36.563195 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:36:36.563361 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2570607285 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:36:36.568213 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:36:36.568257 1588554 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:36:36.568398 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube393911802 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:36:36.571999 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.572033 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:36:36.572185 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2353575520 /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.577466 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.577543 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.587661 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.598560 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.598607 1588554 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:36:36.598954 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2771751730 /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.603217 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:36:36.603313 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:36:36.603600 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4069496750 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:36:36.604133 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:36:36.604165 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:36:36.604308 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1964334193 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:36:36.604545 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:36:36.604574 1588554 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:36:36.604700 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2583663156 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:36:36.610522 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.610602 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.615633 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.616448 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.616504 1588554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.616528 1588554 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0923 10:36:36.616540 1588554 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.616587 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.633448 1588554 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.633487 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:36:36.633636 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3570026092 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.637790 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:36:36.637820 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:36:36.637954 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2992782773 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:36:36.646982 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.677372 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.679555 1588554 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:36:36.679857 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4202431507 /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.688839 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:36:36.688874 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:36:36.689001 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube389006966 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:36:36.693416 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:36:36.693456 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:36:36.693585 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2951849839 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:36:36.738946 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.774333 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:36:36.774371 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:36:36.774529 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1226040952 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:36:36.785891 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:36:36.785936 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:36:36.786131 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1330733841 /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:36:36.796363 1588554 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:36:36.807897 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.807939 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:36:36.808082 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube111334727 /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.814837 1588554 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 10:36:36.818242 1588554 node_ready.go:49] node "ubuntu-20-agent-12" has status "Ready":"True"
	I0923 10:36:36.818281 1588554 node_ready.go:38] duration metric: took 3.403871ms for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 10:36:36.818293 1588554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:36:36.823705 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.828322 1588554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:36.832595 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:36:36.832627 1588554 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:36:36.832974 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1712125769 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:36:36.870153 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:36:36.870197 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:36:36.870386 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2973576979 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:36:36.926104 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:36:36.926143 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:36:36.926289 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2280122930 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:36:36.938896 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:36.938934 1588554 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:36:36.939070 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1690561903 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:36.950670 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:37.100928 1588554 addons.go:475] Verifying addon registry=true in "minikube"
	I0923 10:36:37.102814 1588554 out.go:177] * Verifying registry addon...
	I0923 10:36:37.112453 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:36:37.120259 1588554 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:36:37.187559 1588554 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0923 10:36:37.634285 1588554 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:36:37.634317 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:37.695664 1588554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0923 10:36:37.724258 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.07719175s)
	I0923 10:36:37.724301 1588554 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0923 10:36:37.739850 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.124159231s)
	I0923 10:36:37.742561 1588554 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0923 10:36:37.849519 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.025767323s)
	I0923 10:36:38.120128 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:38.376349 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.69890606s)
	W0923 10:36:38.376406 1588554 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:36:38.376435 1588554 retry.go:31] will retry after 154.227647ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:36:38.532717 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:38.617615 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:38.835917 1588554 pod_ready.go:103] pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:39.116010 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:39.531492 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.580742626s)
	I0923 10:36:39.531534 1588554 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0923 10:36:39.537060 1588554 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:36:39.539558 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:36:39.547478 1588554 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:36:39.547508 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:39.616393 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:39.677521 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.146291745s)
	I0923 10:36:40.048496 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:40.116802 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:40.545476 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:40.617107 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:40.834321 1588554 pod_ready.go:93] pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:40.834347 1588554 pod_ready.go:82] duration metric: took 4.005994703s for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:40.834359 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:41.044378 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:41.144560 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:41.351204 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.818400841s)
	I0923 10:36:41.545380 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:41.616429 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.044309 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:42.116963 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.545513 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:42.616637 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.841366 1588554 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:43.045300 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:43.116762 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:43.431875 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:36:43.432127 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3269004284 /var/lib/minikube/google_application_credentials.json
	I0923 10:36:43.445163 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:36:43.445319 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3403460145 /var/lib/minikube/google_cloud_project
	I0923 10:36:43.457431 1588554 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0923 10:36:43.457499 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:43.458127 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:43.458149 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:43.458181 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:43.479053 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:43.491340 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:43.491424 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:43.503388 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:43.503426 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:43.508517 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:43.508577 1588554 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:36:43.511610 1588554 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:36:43.513346 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:36:43.514725 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:36:43.514758 1588554 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:36:43.514881 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube616037526 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:36:43.525139 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:36:43.525184 1588554 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:36:43.525334 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3406397122 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:36:43.536623 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.536656 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:36:43.536845 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3654027324 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.544627 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:43.548001 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.616664 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:44.106662 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:44.245172 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:44.462186 1588554 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0923 10:36:44.463828 1588554 out.go:177] * Verifying gcp-auth addon...
	I0923 10:36:44.466561 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:36:44.469735 1588554 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:36:44.571760 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:44.616121 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:45.045508 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:45.116582 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:45.342074 1588554 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:45.544902 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:45.617645 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.044759 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:46.117793 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.546485 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:46.616891 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.840864 1588554 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.840888 1588554 pod_ready.go:82] duration metric: took 6.006520139s for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.840899 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.846458 1588554 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.846487 1588554 pod_ready.go:82] duration metric: took 5.579842ms for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.846499 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.850991 1588554 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.851013 1588554 pod_ready.go:82] duration metric: took 4.506621ms for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.851020 1588554 pod_ready.go:39] duration metric: took 10.032714922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:36:46.851040 1588554 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:36:46.851099 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:46.875129 1588554 api_server.go:72] duration metric: took 10.525769516s to wait for apiserver process to appear ...
	I0923 10:36:46.875164 1588554 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:36:46.875191 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:46.879815 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:46.880904 1588554 api_server.go:141] control plane version: v1.31.1
	I0923 10:36:46.880933 1588554 api_server.go:131] duration metric: took 5.761723ms to wait for apiserver health ...
	I0923 10:36:46.880944 1588554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:36:46.889660 1588554 system_pods.go:59] 16 kube-system pods found
	I0923 10:36:46.889699 1588554 system_pods.go:61] "coredns-7c65d6cfc9-p5xcl" [f5f9a7c8-fde0-47d4-ad0d-64ad04053a9c] Running
	I0923 10:36:46.889712 1588554 system_pods.go:61] "csi-hostpath-attacher-0" [3359d397-e4ff-42f7-a50a-d3f528d35993] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:36:46.889722 1588554 system_pods.go:61] "csi-hostpath-resizer-0" [9c4d8c86-795e-4ef6-a3ee-092372993d50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:36:46.889739 1588554 system_pods.go:61] "csi-hostpathplugin-2flxk" [1fd9aa09-39b0-440c-a97d-578bbad40f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:36:46.889746 1588554 system_pods.go:61] "etcd-ubuntu-20-agent-12" [a5459b2e-0d67-4c43-9e0d-f680efb64d3f] Running
	I0923 10:36:46.889752 1588554 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-12" [1a730626-aab7-4d08-b75b-523608e16b08] Running
	I0923 10:36:46.889759 1588554 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-12" [e67abe58-a228-4b5d-a487-1afe60ef2341] Running
	I0923 10:36:46.889765 1588554 system_pods.go:61] "kube-proxy-275md" [5201ac4e-6f2a-4040-ba5b-de3260351ceb] Running
	I0923 10:36:46.889770 1588554 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-12" [a148d437-fa1a-470b-a96d-ac0bd83228cd] Running
	I0923 10:36:46.889777 1588554 system_pods.go:61] "metrics-server-84c5f94fbc-l8xpt" [be83f637-49a0-4d61-b588-544359407926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:36:46.889783 1588554 system_pods.go:61] "nvidia-device-plugin-daemonset-rmgc2" [7b196bf3-bd4c-4575-9cd3-d1c7adf5e6be] Running
	I0923 10:36:46.889793 1588554 system_pods.go:61] "registry-66c9cd494c-xghlh" [3805a0ce-c102-4a58-92fb-1845d803f30a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:36:46.889800 1588554 system_pods.go:61] "registry-proxy-j2dg7" [04db77a5-6d0f-40b1-b220-f94a39762520] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:36:46.889810 1588554 system_pods.go:61] "snapshot-controller-56fcc65765-ncqwr" [9e2acf06-ed7b-441d-95cd-2bf1bcde1ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.889821 1588554 system_pods.go:61] "snapshot-controller-56fcc65765-xp8jb" [420b2463-f719-45de-a16b-01add2f57250] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.889826 1588554 system_pods.go:61] "storage-provisioner" [609264e3-b351-446c-bb44-88cf8a4fbfca] Running
	I0923 10:36:46.889835 1588554 system_pods.go:74] duration metric: took 8.88361ms to wait for pod list to return data ...
	I0923 10:36:46.889844 1588554 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:36:46.892857 1588554 default_sa.go:45] found service account: "default"
	I0923 10:36:46.892882 1588554 default_sa.go:55] duration metric: took 3.031168ms for default service account to be created ...
	I0923 10:36:46.892893 1588554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:36:46.901634 1588554 system_pods.go:86] 16 kube-system pods found
	I0923 10:36:46.901674 1588554 system_pods.go:89] "coredns-7c65d6cfc9-p5xcl" [f5f9a7c8-fde0-47d4-ad0d-64ad04053a9c] Running
	I0923 10:36:46.901688 1588554 system_pods.go:89] "csi-hostpath-attacher-0" [3359d397-e4ff-42f7-a50a-d3f528d35993] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:36:46.901699 1588554 system_pods.go:89] "csi-hostpath-resizer-0" [9c4d8c86-795e-4ef6-a3ee-092372993d50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:36:46.901714 1588554 system_pods.go:89] "csi-hostpathplugin-2flxk" [1fd9aa09-39b0-440c-a97d-578bbad40f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:36:46.901725 1588554 system_pods.go:89] "etcd-ubuntu-20-agent-12" [a5459b2e-0d67-4c43-9e0d-f680efb64d3f] Running
	I0923 10:36:46.901732 1588554 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-12" [1a730626-aab7-4d08-b75b-523608e16b08] Running
	I0923 10:36:46.901741 1588554 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-12" [e67abe58-a228-4b5d-a487-1afe60ef2341] Running
	I0923 10:36:46.901747 1588554 system_pods.go:89] "kube-proxy-275md" [5201ac4e-6f2a-4040-ba5b-de3260351ceb] Running
	I0923 10:36:46.901753 1588554 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-12" [a148d437-fa1a-470b-a96d-ac0bd83228cd] Running
	I0923 10:36:46.901767 1588554 system_pods.go:89] "metrics-server-84c5f94fbc-l8xpt" [be83f637-49a0-4d61-b588-544359407926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:36:46.901776 1588554 system_pods.go:89] "nvidia-device-plugin-daemonset-rmgc2" [7b196bf3-bd4c-4575-9cd3-d1c7adf5e6be] Running
	I0923 10:36:46.901784 1588554 system_pods.go:89] "registry-66c9cd494c-xghlh" [3805a0ce-c102-4a58-92fb-1845d803f30a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:36:46.901790 1588554 system_pods.go:89] "registry-proxy-j2dg7" [04db77a5-6d0f-40b1-b220-f94a39762520] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:36:46.901801 1588554 system_pods.go:89] "snapshot-controller-56fcc65765-ncqwr" [9e2acf06-ed7b-441d-95cd-2bf1bcde1ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.901810 1588554 system_pods.go:89] "snapshot-controller-56fcc65765-xp8jb" [420b2463-f719-45de-a16b-01add2f57250] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.901814 1588554 system_pods.go:89] "storage-provisioner" [609264e3-b351-446c-bb44-88cf8a4fbfca] Running
	I0923 10:36:46.901824 1588554 system_pods.go:126] duration metric: took 8.925234ms to wait for k8s-apps to be running ...
	I0923 10:36:46.901834 1588554 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:36:46.901887 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:36:46.916755 1588554 system_svc.go:56] duration metric: took 14.881074ms WaitForService to wait for kubelet
	I0923 10:36:46.916789 1588554 kubeadm.go:582] duration metric: took 10.567438885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:36:46.916809 1588554 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:36:46.920579 1588554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 10:36:46.920616 1588554 node_conditions.go:123] node cpu capacity is 8
	I0923 10:36:46.920632 1588554 node_conditions.go:105] duration metric: took 3.817539ms to run NodePressure ...
	I0923 10:36:46.920648 1588554 start.go:241] waiting for startup goroutines ...
	I0923 10:36:47.045158 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:47.117155 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:47.572416 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:47.616622 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:48.045426 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:48.116767 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:48.573214 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:48.616845 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:49.044221 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:49.117209 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:49.543831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:49.615831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:50.044752 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:50.117047 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:50.572160 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:50.617157 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:51.045029 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:51.116892 1588554 kapi.go:107] duration metric: took 14.004458573s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:36:51.571831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:52.044681 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:52.544488 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:53.071964 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:53.544286 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:54.044362 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:54.572181 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:55.073837 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:55.544285 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:56.044544 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:56.545079 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:57.044265 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:57.544710 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:58.074493 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:58.544754 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:59.044416 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:59.545731 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:00.044364 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:00.545006 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:01.043696 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:01.544143 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:02.044850 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:02.544007 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:03.073713 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:03.544432 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:04.044116 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:04.544249 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:05.084663 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:05.545630 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:06.073711 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:06.545674 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:07.074336 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:07.573379 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:08.072260 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:08.573326 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:09.046665 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:09.572302 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:10.044323 1588554 kapi.go:107] duration metric: took 30.504755495s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:42:44.467839 1588554 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0923 10:42:44.467877 1588554 kapi.go:107] duration metric: took 6m0.001323817s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0923 10:42:44.467989 1588554 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0923 10:42:44.469896 1588554 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver
	I0923 10:42:44.471562 1588554 addons.go:510] duration metric: took 6m8.126806783s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver]
	I0923 10:42:44.471618 1588554 start.go:246] waiting for cluster config update ...
	I0923 10:42:44.471643 1588554 start.go:255] writing updated cluster config ...
	I0923 10:42:44.471977 1588554 exec_runner.go:51] Run: rm -f paused
	I0923 10:42:44.523125 1588554 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:42:44.524945 1588554 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 10:56:13 UTC. --
	Sep 23 10:42:50 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:50Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 10:42:59 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:42:59Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:43:00 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:43:00.881341653Z" level=info msg="ignoring event" container=479fe5cc32913c30ee1f61f86ce466c10554b176126704459014bdbdced160af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:47:49 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:47:49Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:47:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:47:52Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 10:47:53 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:47:53Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:48:04 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:48:04Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.540680915Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.540684219Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.542670843Z" level=error msg="Error running exec 5fd2d79e980950ca565c3a912c8440ea08719c5a16c1780c5869c00f977ccd0f in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=5608c228de976ea9 traceID=04969482329070952bf3db909444f8ca
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.744401240Z" level=info msg="ignoring event" container=3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.739922067Z" level=info msg="ignoring event" container=cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.812004030Z" level=info msg="ignoring event" container=9740e1ab45dffcba4eaa96160ed6e0a5385ee27e147bb376ac61e7e743929bfd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.882744210Z" level=info msg="ignoring event" container=b877c8259724a59128251b16cfbdf29c388b2ab853f4a4a08190f60af4e3434d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.988558076Z" level=info msg="ignoring event" container=d6ea241113e500cf3b405d989c416e01c0bc41267ce5bffed361a01c11edbd21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:49:52Z" level=error msg="error getting RW layer size for container ID 'cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77': Error response from daemon: No such container: cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77"
	Sep 23 10:49:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:49:52Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77'"
	Sep 23 10:49:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:49:52Z" level=error msg="error getting RW layer size for container ID '9740e1ab45dffcba4eaa96160ed6e0a5385ee27e147bb376ac61e7e743929bfd': Error response from daemon: No such container: 9740e1ab45dffcba4eaa96160ed6e0a5385ee27e147bb376ac61e7e743929bfd"
	Sep 23 10:49:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:49:52Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9740e1ab45dffcba4eaa96160ed6e0a5385ee27e147bb376ac61e7e743929bfd'"
	Sep 23 10:49:52 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:52.396588609Z" level=info msg="ignoring event" container=f44622d46ba2ff4fa5093d028c0d993d004a691db3525cf78779461bd1b6a21f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:50:04 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:50:04.012648513Z" level=info msg="ignoring event" container=7df30468750a3330ba5db4cc23ff317ad04892789778ac43bcf58194a92677f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:50:04 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:50:04.142232385Z" level=info msg="ignoring event" container=26d7d65f4a1100216cd9a8d9613b9d25ba9e84b925315943e951ae668a77c600 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:52:56 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:52:56Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:53:01 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:53:01Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:53:01 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:53:01Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	1c0aec03476e1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          19 minutes ago      Running             csi-snapshotter                          0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	f22e4f1571647       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          19 minutes ago      Running             csi-provisioner                          0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	b43acbe9c46ae       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            19 minutes ago      Running             liveness-probe                           0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	80af8a926afc3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           19 minutes ago      Running             hostpath                                 0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	6f57e7ad00a9e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                19 minutes ago      Running             node-driver-registrar                    0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	369c356333963       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              19 minutes ago      Running             csi-resizer                              0                   83f21cc9148ed       csi-hostpath-resizer-0
	764a5f36015a2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   19 minutes ago      Running             csi-external-health-monitor-controller   0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	5e03ecec68932       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             19 minutes ago      Running             csi-attacher                             0                   04bee9af65b88       csi-hostpath-attacher-0
	2a9c9054db024       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      19 minutes ago      Running             volume-snapshot-controller               0                   954881763f4d2       snapshot-controller-56fcc65765-xp8jb
	5189bf51dfe60       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      19 minutes ago      Running             volume-snapshot-controller               0                   3a5a27bdb1e27       snapshot-controller-56fcc65765-ncqwr
	100fd02a1faf5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        19 minutes ago      Running             yakd                                     0                   aad214bb107e1       yakd-dashboard-67d98fc6b-j4j2x
	e6929e7afa035       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               19 minutes ago      Running             cloud-spanner-emulator                   0                   45d7b20be1819       cloud-spanner-emulator-5b584cc74-97lv7
	88b34955ceb18       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       19 minutes ago      Running             local-path-provisioner                   0                   34f59459d9996       local-path-provisioner-86d989889c-r6cj8
	71c8aef5c5c24       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     19 minutes ago      Running             nvidia-device-plugin-ctr                 0                   2b86e9d29eb33       nvidia-device-plugin-daemonset-rmgc2
	c98c33bab4e43       c69fa2e9cbf5f                                                                                                                                19 minutes ago      Running             coredns                                  0                   f681430aabf24       coredns-7c65d6cfc9-p5xcl
	045fad5ce6ab4       60c005f310ff3                                                                                                                                19 minutes ago      Running             kube-proxy                               0                   6e8a6bce97790       kube-proxy-275md
	a88800a1ce5b9       6e38f40d628db                                                                                                                                19 minutes ago      Running             storage-provisioner                      0                   e04842fad72fa       storage-provisioner
	e008cb9d44fcb       175ffd71cce3d                                                                                                                                19 minutes ago      Running             kube-controller-manager                  0                   2f63f87bd15d1       kube-controller-manager-ubuntu-20-agent-12
	cefe11af8e634       9aa1fad941575                                                                                                                                19 minutes ago      Running             kube-scheduler                           0                   3f8185d06efd3       kube-scheduler-ubuntu-20-agent-12
	98649c04ed191       6bab7719df100                                                                                                                                19 minutes ago      Running             kube-apiserver                           0                   60b7c561b6237       kube-apiserver-ubuntu-20-agent-12
	891452784bf9b       2e96e5913fc06                                                                                                                                19 minutes ago      Running             etcd                                     0                   087dc8c7c97f8       etcd-ubuntu-20-agent-12
	
	
	==> coredns [c98c33bab4e4] <==
	[INFO] 10.244.0.5:39130 - 49408 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011371s
	[INFO] 10.244.0.5:36683 - 40984 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092092s
	[INFO] 10.244.0.5:36683 - 54814 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000177141s
	[INFO] 10.244.0.5:48486 - 28442 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000086929s
	[INFO] 10.244.0.5:48486 - 5406 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000127637s
	[INFO] 10.244.0.5:59402 - 60382 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000079785s
	[INFO] 10.244.0.5:59402 - 6106 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000100251s
	[INFO] 10.244.0.5:56367 - 45414 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00007586s
	[INFO] 10.244.0.5:56367 - 44632 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000107663s
	[INFO] 10.244.0.5:56779 - 21145 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071153s
	[INFO] 10.244.0.5:56779 - 17307 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139638s
	[INFO] 10.244.0.5:50701 - 22008 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00010586s
	[INFO] 10.244.0.5:50701 - 60925 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136235s
	[INFO] 10.244.0.5:34160 - 49361 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079304s
	[INFO] 10.244.0.5:34160 - 47831 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000185735s
	[INFO] 10.244.0.5:46275 - 16771 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008177s
	[INFO] 10.244.0.5:46275 - 49536 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108335s
	[INFO] 10.244.0.5:47968 - 20526 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.00008698s
	[INFO] 10.244.0.5:47968 - 10797 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000120657s
	[INFO] 10.244.0.5:37248 - 56533 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000080178s
	[INFO] 10.244.0.5:37248 - 45520 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000103163s
	[INFO] 10.244.0.5:39385 - 32664 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000082135s
	[INFO] 10.244.0.5:39385 - 56732 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000177006s
	[INFO] 10.244.0.5:37963 - 19331 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068935s
	[INFO] 10.244.0.5:37963 - 62598 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104055s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-12
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-12
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_36_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-12
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-12"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:36:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-12
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:56:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:52:53 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:52:53 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:52:53 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:52:53 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.128.15.239
	  Hostname:    ubuntu-20-agent-12
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                26e2d22b-def2-c216-b2a9-007020fa8ce7
	  Boot ID:                    83656df0-482a-417d-b7fc-90bc5fb37652
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-97lv7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7c65d6cfc9-p5xcl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpathplugin-2flxk                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ubuntu-20-agent-12                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kube-apiserver-ubuntu-20-agent-12             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-12    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-275md                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ubuntu-20-agent-12             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 nvidia-device-plugin-daemonset-rmgc2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-56fcc65765-ncqwr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-56fcc65765-xp8jb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  local-path-storage          local-path-provisioner-86d989889c-r6cj8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  volcano-system              volcano-admission-7f54bd7598-rfghv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  volcano-system              volcano-admission-init-gh7z4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  volcano-system              volcano-controllers-5ff7c5d4db-529t5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  volcano-system              volcano-scheduler-79dc4b78bb-zdd4g            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-j4j2x                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             298Mi (0%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 19m   kube-proxy       
	  Normal   Starting                 19m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m   node-controller  Node ubuntu-20-agent-12 event: Registered Node ubuntu-20-agent-12 in Controller
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 28 f8 d2 0a cd 08 06
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7e b8 fc 4c f3 9c 08 06
	[Sep23 10:36] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 6e 58 88 a9 4c 08 06
	[ +10.128758] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 aa 9b fb 38 08 06
	[  +0.000410] IPv4: martian source 10.244.0.5 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 6e 58 88 a9 4c 08 06
	[  +2.001125] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 02 27 ad 4b 0d 08 06
	[  +0.032532] IPv4: martian source 10.244.0.5 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e ed 25 59 75 f3 08 06
	[  +3.912883] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 62 ba d6 13 c3 e3 08 06
	[  +2.709643] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 66 31 90 37 c7 08 06
	[  +0.019221] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 1d 22 9e 8e 47 08 06
	[  +9.151781] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 ca ad 28 d8 56 08 06
	[  +0.348439] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 59 84 5e b0 7b 08 06
	[  +0.569834] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e c1 ff 28 29 42 08 06
	
	
	==> etcd [891452784bf9] <==
	{"level":"info","ts":"2024-09-23T10:36:28.599225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dd041fa4dc6d4aac elected leader dd041fa4dc6d4aac at term 2"}
	{"level":"info","ts":"2024-09-23T10:36:28.600162Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.600816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:36:28.600810Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dd041fa4dc6d4aac","local-member-attributes":"{Name:ubuntu-20-agent-12 ClientURLs:[https://10.128.15.239:2379]}","request-path":"/0/members/dd041fa4dc6d4aac/attributes","cluster-id":"c05a044d5786a1e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:36:28.600843Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:36:28.600903Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c05a044d5786a1e7","local-member-id":"dd041fa4dc6d4aac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.600975Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.601004Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.601085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:36:28.601103Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:36:28.601891Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:36:28.602013Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:36:28.602702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.239:2379"}
	{"level":"info","ts":"2024-09-23T10:36:28.603219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:36:44.242056Z","caller":"traceutil/trace.go:171","msg":"trace[1467056625] linearizableReadLoop","detail":"{readStateIndex:849; appliedIndex:845; }","duration":"128.026224ms","start":"2024-09-23T10:36:44.114013Z","end":"2024-09-23T10:36:44.242039Z","steps":["trace[1467056625] 'read index received'  (duration: 46.430648ms)","trace[1467056625] 'applied index is now lower than readState.Index'  (duration: 81.594963ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:36:44.242093Z","caller":"traceutil/trace.go:171","msg":"trace[2126161537] transaction","detail":"{read_only:false; response_revision:831; number_of_response:1; }","duration":"134.824059ms","start":"2024-09-23T10:36:44.107242Z","end":"2024-09-23T10:36:44.242066Z","steps":["trace[2126161537] 'process raft request'  (duration: 123.210784ms)","trace[2126161537] 'compare'  (duration: 11.439426ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:36:44.242290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.188403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:36:44.242444Z","caller":"traceutil/trace.go:171","msg":"trace[1472265816] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:832; }","duration":"128.418389ms","start":"2024-09-23T10:36:44.114009Z","end":"2024-09-23T10:36:44.242428Z","steps":["trace[1472265816] 'agreement among raft nodes before linearized reading'  (duration: 128.138624ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:36:44.242340Z","caller":"traceutil/trace.go:171","msg":"trace[1535126050] transaction","detail":"{read_only:false; response_revision:832; number_of_response:1; }","duration":"133.407624ms","start":"2024-09-23T10:36:44.108904Z","end":"2024-09-23T10:36:44.242312Z","steps":["trace[1535126050] 'process raft request'  (duration: 133.085569ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:46:28.621172Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1493}
	{"level":"info","ts":"2024-09-23T10:46:28.644160Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1493,"took":"22.540162ms","hash":974073395,"current-db-size-bytes":7499776,"current-db-size":"7.5 MB","current-db-size-in-use-bytes":3624960,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T10:46:28.644213Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":974073395,"revision":1493,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T10:51:28.626660Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1885}
	{"level":"info","ts":"2024-09-23T10:51:28.643237Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1885,"took":"16.000986ms","hash":3586383635,"current-db-size-bytes":7499776,"current-db-size":"7.5 MB","current-db-size-in-use-bytes":3063808,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-09-23T10:51:28.643296Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3586383635,"revision":1885,"compact-revision":1493}
	
	
	==> kernel <==
	 10:56:14 up 1 day, 16:38,  0 users,  load average: 0.01, 0.13, 0.51
	Linux ubuntu-20-agent-12 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [98649c04ed19] <==
	W0923 10:51:47.629403       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:52:47.637493       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:52:47.637537       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:52:47.637493       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:52:47.637574       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:52:47.639030       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:52:47.639043       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:53:47.647600       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:53:47.647650       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:53:47.647600       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:53:47.647684       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:53:47.649260       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:53:47.649260       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:54:47.657617       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	W0923 10:54:47.657640       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:54:47.657666       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	E0923 10:54:47.657666       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:54:47.659203       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:54:47.659206       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:55:47.668253       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:55:47.668302       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:55:47.668253       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 10:55:47.668333       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 10:55:47.669966       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 10:55:47.669993       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	
	
	==> kube-controller-manager [e008cb9d44fc] <==
	I0923 10:53:12.174269       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	I0923 10:53:15.172452       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="70.174µs"
	I0923 10:53:21.171852       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="71.446µs"
	I0923 10:53:24.171634       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	W0923 10:53:25.510028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:53:25.510073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:53:29.170858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="112.339µs"
	E0923 10:53:47.650002       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:53:47.650028       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:53:47.651208       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:53:47.651217       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	W0923 10:53:57.540001       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:53:57.540059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:54:36.143871       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:54:36.143920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 10:54:47.659769       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:54:47.659769       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:54:47.660887       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:54:47.660899       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	W0923 10:55:30.887997       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:55:30.888046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 10:55:47.670672       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:55:47.670697       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:55:47.671902       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 10:55:47.671902       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-proxy [045fad5ce6ab] <==
	I0923 10:36:38.573406       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:36:38.729619       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.128.15.239"]
	E0923 10:36:38.729768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:36:38.818441       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:36:38.818516       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:36:38.825889       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:36:38.826286       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:36:38.826330       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:36:38.829447       1 config.go:328] "Starting node config controller"
	I0923 10:36:38.829476       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:36:38.830499       1 config.go:199] "Starting service config controller"
	I0923 10:36:38.830549       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:36:38.830606       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:36:38.830612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:36:38.931771       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:36:38.931860       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:36:38.938436       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cefe11af8e63] <==
	W0923 10:36:30.422004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:36:30.422053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.448133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:36:30.448193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.597590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:36:30.597642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.627316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.627362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.638928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 10:36:30.638980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.639681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:36:30.639714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.656288       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:36:30.656331       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:36:30.673851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.673901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.732651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.732705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.750217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:36:30.750269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.788871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.788927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.793547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:36:30.793590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:36:32.724371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 10:56:14 UTC. --
	Sep 23 10:54:49 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:54:49.163795 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:54:51 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:54:51.164488 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:54:57 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:54:57.162532 1590014 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-97lv7" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 10:55:00 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:00.164677 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:55:00 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:00.164853 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:55:03 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:03.599705 1590014 secret.go:188] Couldn't get secret volcano-system/volcano-admission-secret: secret "volcano-admission-secret" not found
	Sep 23 10:55:03 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:03.599813 1590014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bd93063-1d57-4569-b1ce-3b0c16811d04-admission-certs podName:5bd93063-1d57-4569-b1ce-3b0c16811d04 nodeName:}" failed. No retries permitted until 2024-09-23 10:57:05.599793935 +0000 UTC m=+1233.531317729 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "admission-certs" (UniqueName: "kubernetes.io/secret/5bd93063-1d57-4569-b1ce-3b0c16811d04-admission-certs") pod "volcano-admission-7f54bd7598-rfghv" (UID: "5bd93063-1d57-4569-b1ce-3b0c16811d04") : secret "volcano-admission-secret" not found
	Sep 23 10:55:06 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:06.164442 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:55:12 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:12.167571 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:55:15 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:15.163774 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:55:21 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:55:21.162684 1590014 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-p5xcl" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 10:55:21 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:21.165146 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:55:24 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:24.164221 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:55:30 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:30.164344 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:55:33 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:33.164759 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:55:39 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:39.164595 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:55:44 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:44.164669 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:55:47 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:47.164445 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:55:50 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:50.163637 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:55:58 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:58.163987 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:55:58 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:55:58.163987 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:56:04 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:56:04.163892 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 10:56:11 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:56:11.163450 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 10:56:12 ubuntu-20-agent-12 kubelet[1590014]: E0923 10:56:12.164179 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 10:56:13 ubuntu-20-agent-12 kubelet[1590014]: I0923 10:56:13.161933 1590014 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-97lv7" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [a88800a1ce5b] <==
	I0923 10:36:38.418197       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:36:38.433696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:36:38.433749       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:36:38.445674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:36:38.446763       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0!
	I0923 10:36:38.449267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35a6bb7a-1e48-4bf9-816a-2d141c61bd81", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0 became leader
	I0923 10:36:38.547698       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g: exit status 1 (68.85754ms)

** stderr ** 
	Error from server (NotFound): pods "volcano-admission-7f54bd7598-rfghv" not found
	Error from server (NotFound): pods "volcano-admission-init-gh7z4" not found
	Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-529t5" not found
	Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-zdd4g" not found

** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g: exit status 1
--- FAIL: TestAddons/parallel/CSI (371.71s)

TestAddons/parallel/Headlamp (481.97s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:329: TestAddons/parallel/Headlamp: WARNING: pod list for "headlamp" "app.kubernetes.io/name=headlamp" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:773: ***** TestAddons/parallel/Headlamp: pod "app.kubernetes.io/name=headlamp" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:773: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
addons_test.go:773: TestAddons/parallel/Headlamp: showing logs for failed pods as of 2024-09-23 11:04:15.40603593 +0000 UTC m=+1747.787608852
addons_test.go:774: failed waiting for headlamp pod: app.kubernetes.io/name=headlamp within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:42273               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:36 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:42 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:49 UTC | 23 Sep 24 10:49 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:49 UTC | 23 Sep 24 10:49 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:49 UTC | 23 Sep 24 10:49 UTC |
	|         | minikube                             |          |         |         |                     |                     |
	| addons  | minikube addons                      | minikube | jenkins | v1.34.0 | 23 Sep 24 10:50 UTC | 23 Sep 24 10:50 UTC |
	|         | disable metrics-server               |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	| addons  | enable headlamp -p minikube          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:56 UTC | 23 Sep 24 10:56 UTC |
	|         | --alsologtostderr -v=1               |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:36:19
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:36:19.158069 1588554 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:36:19.158231 1588554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:36:19.158241 1588554 out.go:358] Setting ErrFile to fd 2...
	I0923 10:36:19.158245 1588554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:36:19.158464 1588554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-1577701/.minikube/bin
	I0923 10:36:19.159125 1588554 out.go:352] Setting JSON to false
	I0923 10:36:19.160039 1588554 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":145130,"bootTime":1726942649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:36:19.160160 1588554 start.go:139] virtualization: kvm guest
	I0923 10:36:19.162394 1588554 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:36:19.163650 1588554 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:36:19.163676 1588554 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 10:36:19.163732 1588554 notify.go:220] Checking for updates...
	I0923 10:36:19.166389 1588554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:36:19.167804 1588554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:36:19.169081 1588554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	I0923 10:36:19.170968 1588554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:36:19.172507 1588554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:36:19.174424 1588554 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:36:19.185459 1588554 out.go:177] * Using the none driver based on user configuration
	I0923 10:36:19.186681 1588554 start.go:297] selected driver: none
	I0923 10:36:19.186694 1588554 start.go:901] validating driver "none" against <nil>
	I0923 10:36:19.186706 1588554 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:36:19.186759 1588554 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 10:36:19.187052 1588554 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0923 10:36:19.187561 1588554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:36:19.187804 1588554 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:36:19.187836 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:19.187883 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:19.187891 1588554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:36:19.187950 1588554 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:36:19.190491 1588554 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0923 10:36:19.192247 1588554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json ...
	I0923 10:36:19.192296 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json: {Name:mk0db601d978f1f6b111e723fd0658218dee1a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:19.192505 1588554 start.go:360] acquireMachinesLock for minikube: {Name:mka47a0638fa8ca4d22f1fa46c51878d308fb6cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:36:19.192555 1588554 start.go:364] duration metric: took 26.854µs to acquireMachinesLock for "minikube"
	I0923 10:36:19.192576 1588554 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:36:19.192689 1588554 start.go:125] createHost starting for "" (driver="none")
	I0923 10:36:19.194985 1588554 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0923 10:36:19.196198 1588554 exec_runner.go:51] Run: systemctl --version
	I0923 10:36:19.198807 1588554 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0923 10:36:19.198844 1588554 client.go:168] LocalClient.Create starting
	I0923 10:36:19.198929 1588554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca.pem
	I0923 10:36:19.198967 1588554 main.go:141] libmachine: Decoding PEM data...
	I0923 10:36:19.198986 1588554 main.go:141] libmachine: Parsing certificate...
	I0923 10:36:19.199033 1588554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/cert.pem
	I0923 10:36:19.199052 1588554 main.go:141] libmachine: Decoding PEM data...
	I0923 10:36:19.199065 1588554 main.go:141] libmachine: Parsing certificate...
	I0923 10:36:19.199430 1588554 client.go:171] duration metric: took 577.868µs to LocalClient.Create
	I0923 10:36:19.199455 1588554 start.go:167] duration metric: took 651.01µs to libmachine.API.Create "minikube"
	I0923 10:36:19.199461 1588554 start.go:293] postStartSetup for "minikube" (driver="none")
	I0923 10:36:19.199503 1588554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:36:19.199539 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:36:19.209126 1588554 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:36:19.209149 1588554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:36:19.209157 1588554 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:36:19.210966 1588554 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0923 10:36:19.212083 1588554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-1577701/.minikube/addons for local assets ...
	I0923 10:36:19.212135 1588554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-1577701/.minikube/files for local assets ...
	I0923 10:36:19.212155 1588554 start.go:296] duration metric: took 12.687054ms for postStartSetup
	I0923 10:36:19.212795 1588554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/config.json ...
	I0923 10:36:19.212933 1588554 start.go:128] duration metric: took 20.232501ms to createHost
	I0923 10:36:19.212946 1588554 start.go:83] releasing machines lock for "minikube", held for 20.378727ms
	I0923 10:36:19.213290 1588554 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:36:19.213405 1588554 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0923 10:36:19.215275 1588554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:36:19.215410 1588554 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:36:19.225131 1588554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 10:36:19.225172 1588554 start.go:495] detecting cgroup driver to use...
	I0923 10:36:19.225207 1588554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:36:19.225324 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:36:19.246269 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:36:19.256037 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:36:19.265994 1588554 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:36:19.266081 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:36:19.276368 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:36:19.286490 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:36:19.297389 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:36:19.307066 1588554 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:36:19.316656 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:36:19.326288 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:36:19.336363 1588554 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 10:36:19.346290 1588554 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:36:19.355338 1588554 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:36:19.364071 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:19.577952 1588554 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0923 10:36:19.651036 1588554 start.go:495] detecting cgroup driver to use...
	I0923 10:36:19.651102 1588554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:36:19.651252 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:36:19.672247 1588554 exec_runner.go:51] Run: which cri-dockerd
	I0923 10:36:19.673216 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 10:36:19.681044 1588554 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0923 10:36:19.681067 1588554 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.681103 1588554 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.689425 1588554 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 10:36:19.689591 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4059772120 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:36:19.698668 1588554 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0923 10:36:19.932327 1588554 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0923 10:36:20.150083 1588554 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 10:36:20.150282 1588554 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0923 10:36:20.150300 1588554 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0923 10:36:20.150338 1588554 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0923 10:36:20.158569 1588554 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0923 10:36:20.158734 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2996454661 /etc/docker/daemon.json
	I0923 10:36:20.168354 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:20.379218 1588554 exec_runner.go:51] Run: sudo systemctl restart docker
	I0923 10:36:20.693080 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 10:36:20.705085 1588554 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0923 10:36:20.723552 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:36:20.735597 1588554 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0923 10:36:20.953725 1588554 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0923 10:36:21.177941 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:21.410173 1588554 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0923 10:36:21.423706 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:36:21.435794 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:21.688698 1588554 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0923 10:36:21.764452 1588554 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 10:36:21.764538 1588554 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0923 10:36:21.765977 1588554 start.go:563] Will wait 60s for crictl version
	I0923 10:36:21.766041 1588554 exec_runner.go:51] Run: which crictl
	I0923 10:36:21.767183 1588554 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0923 10:36:21.799990 1588554 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0923 10:36:21.800066 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:21.821449 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:21.845424 1588554 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0923 10:36:21.845506 1588554 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0923 10:36:21.848567 1588554 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0923 10:36:21.850015 1588554 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:36:21.850144 1588554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:36:21.850155 1588554 kubeadm.go:934] updating node { 10.128.15.239 8443 v1.31.1 docker true true} ...
	I0923 10:36:21.850253 1588554 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-12 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.239 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0923 10:36:21.850310 1588554 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0923 10:36:21.901691 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:21.901719 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:21.901730 1588554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:36:21.901755 1588554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.239 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-12 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:36:21.901910 1588554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.128.15.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-12"
	  kubeletExtraArgs:
	    node-ip: 10.128.15.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.128.15.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
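	[editor's note] The kubeadm config dumped above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A minimal sketch, assuming only coreutils and a hypothetical scratch path (the real file in this run is /var/tmp/minikube/kubeadm.yaml), that lists the `kind` of each document in such a multi-doc file:

```shell
#!/bin/sh
set -e
# Trimmed stand-in for the multi-document kubeadm config shown above
# (hypothetical path; content mirrors the four kinds in the log).
cat > /tmp/kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# Each document contributes exactly one top-level "kind:" line.
awk '/^kind:/ {print $2}' /tmp/kubeadm-demo.yaml
```

Note that kubeadm itself later warns (see the `kubeadm init` output below in this log) that the `kubeadm.k8s.io/v1beta3` spec is deprecated and suggests `kubeadm config migrate` to rewrite it against a newer API version.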
	
	I0923 10:36:21.901970 1588554 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:36:21.910706 1588554 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:36:21.910760 1588554 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:36:21.918867 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:36:21.918878 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 10:36:21.918874 1588554 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
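	[editor's note] Each `?checksum=file:...sha256` download URL above pairs a binary with its published SHA-256 digest, so the fetch is rejected if the digest does not match. The underlying check reduces to the standard recompute-and-compare pattern; a self-contained sketch with a dummy file and hypothetical paths (no network, coreutils only):

```shell
#!/bin/sh
set -e
# Stand-in for a downloaded binary and its published .sha256 sidecar file.
printf 'fake-kubelet-bytes' > /tmp/kubelet-demo
sha256sum /tmp/kubelet-demo | awk '{print $1}' > /tmp/kubelet-demo.sha256

# Recompute the digest locally and compare it to the published value,
# mirroring what the checksum=file: download option enforces.
computed="$(sha256sum /tmp/kubelet-demo | awk '{print $1}')"
published="$(cat /tmp/kubelet-demo.sha256)"
test "$computed" = "$published" && echo "checksum OK"
# prints "checksum OK"
```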
	I0923 10:36:21.918927 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:36:21.918927 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:36:21.919007 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:36:21.931740 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 10:36:21.973404 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2218285672 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:36:21.975632 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube621796612 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:36:22.005095 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3553074774 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:36:22.078082 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:36:22.087582 1588554 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0923 10:36:22.087606 1588554 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.087647 1588554 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.095444 1588554 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0923 10:36:22.095602 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4110124182 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:36:22.105645 1588554 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0923 10:36:22.105666 1588554 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0923 10:36:22.105700 1588554 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0923 10:36:22.113822 1588554 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:36:22.114022 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3324119727 /lib/systemd/system/kubelet.service
	I0923 10:36:22.123427 1588554 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0923 10:36:22.123598 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3318915681 /var/tmp/minikube/kubeadm.yaml.new
	I0923 10:36:22.131907 1588554 exec_runner.go:51] Run: grep 10.128.15.239	control-plane.minikube.internal$ /etc/hosts
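	[editor's note] The `grep` above verifies that /etc/hosts already maps the node IP to control-plane.minikube.internal: the pattern is the IP, a literal tab, then the hostname anchored at end of line. A minimal reproduction against a scratch file (hypothetical path, so the real /etc/hosts is untouched):

```shell
#!/bin/sh
set -e
# Scratch hosts file containing the expected tab-separated control-plane entry.
printf '127.0.0.1\tlocalhost\n10.128.15.239\tcontrol-plane.minikube.internal\n' > /tmp/hosts-demo

# Same shape as the check in the log: IP, tab, name, anchored at line end.
# Prints the matching entry; exits non-zero if the entry were missing.
grep "10.128.15.239$(printf '\t')control-plane.minikube.internal\$" /tmp/hosts-demo
```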
	I0923 10:36:22.133649 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:22.363463 1588554 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:36:22.378439 1588554 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube for IP: 10.128.15.239
	I0923 10:36:22.378459 1588554 certs.go:194] generating shared ca certs ...
	I0923 10:36:22.378479 1588554 certs.go:226] acquiring lock for ca certs: {Name:mk757d3be8cf2fb32b8856d4b5e3173183901a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.378637 1588554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.key
	I0923 10:36:22.378678 1588554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.key
	I0923 10:36:22.378687 1588554 certs.go:256] generating profile certs ...
	I0923 10:36:22.378744 1588554 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key
	I0923 10:36:22.378763 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt with IP's: []
	I0923 10:36:22.592011 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt ...
	I0923 10:36:22.592085 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.crt: {Name:mk1bdb710d99b77b32099c81dc261479f881a61c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.592249 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key ...
	I0923 10:36:22.592262 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/client.key: {Name:mk990e2a3a19cc03d4722edbfa635f5e467b2b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.592353 1588554 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83
	I0923 10:36:22.592371 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.239]
	I0923 10:36:22.826429 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 ...
	I0923 10:36:22.826468 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83: {Name:mkdaa76b99a75fc999a744f15c5aa0e73646ad27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.826632 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83 ...
	I0923 10:36:22.826650 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83: {Name:mk5c84f7ccec239df3b3f71560e288a437b89d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.826728 1588554 certs.go:381] copying /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt.ed77be83 -> /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt
	I0923 10:36:22.826837 1588554 certs.go:385] copying /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key.ed77be83 -> /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key
	I0923 10:36:22.826896 1588554 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key
	I0923 10:36:22.826913 1588554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0923 10:36:22.988376 1588554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt ...
	I0923 10:36:22.988415 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt: {Name:mk1a79d5dbe06be337e3230425d1c5cb0b5c9c8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.988572 1588554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key ...
	I0923 10:36:22.988587 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key: {Name:mk7f2be748011aa06064cd625f3afbd5fec49aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:22.988800 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:36:22.988842 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:36:22.988874 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:36:22.988896 1588554 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-1577701/.minikube/certs/key.pem (1675 bytes)
	I0923 10:36:22.989638 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:36:22.989763 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube32048499 /var/lib/minikube/certs/ca.crt
	I0923 10:36:22.999482 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 10:36:22.999627 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2462737595 /var/lib/minikube/certs/ca.key
	I0923 10:36:23.008271 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:36:23.008403 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2315409218 /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:36:23.016619 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:36:23.016796 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2778680620 /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:36:23.026283 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0923 10:36:23.026429 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2563673913 /var/lib/minikube/certs/apiserver.crt
	I0923 10:36:23.034367 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:36:23.034559 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1327376112 /var/lib/minikube/certs/apiserver.key
	I0923 10:36:23.043236 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:36:23.043385 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3861098534 /var/lib/minikube/certs/proxy-client.crt
	I0923 10:36:23.053261 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:36:23.053393 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1865989171 /var/lib/minikube/certs/proxy-client.key
	I0923 10:36:23.062749 1588554 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0923 10:36:23.062771 1588554 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.062810 1588554 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.070407 1588554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-1577701/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:36:23.070572 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2921020744 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.078922 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:36:23.079082 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1931847277 /var/lib/minikube/kubeconfig
	I0923 10:36:23.087191 1588554 exec_runner.go:51] Run: openssl version
	I0923 10:36:23.090067 1588554 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:36:23.098811 1588554 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.100243 1588554 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 23 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.100280 1588554 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:36:23.103237 1588554 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
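	[editor's note] The `b5213941.0` symlink above follows OpenSSL's hashed CA directory convention: a CA in /etc/ssl/certs is found by a link named after the certificate's subject hash (what `openssl x509 -hash -noout` prints) plus a `.0` collision suffix. A sketch with a throwaway self-signed CA (assumes the `openssl` CLI is installed; names are hypothetical, the real CA here is minikubeCA.pem):

```shell
#!/bin/sh
set -e
cd /tmp
# Generate a throwaway self-signed cert to stand in for the CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout demo-ca.key -out demo-ca.pem 2>/dev/null

# The link name is the subject hash plus ".0", which is exactly how the
# b5213941.0 link for minikubeCA.pem is derived in the log above.
hash="$(openssl x509 -hash -noout -in demo-ca.pem)"
ln -fs demo-ca.pem "${hash}.0"
ls -l "${hash}.0"
```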
	I0923 10:36:23.112696 1588554 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:36:23.113952 1588554 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:36:23.113993 1588554 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:36:23.114121 1588554 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 10:36:23.130863 1588554 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:36:23.141170 1588554 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:36:23.154896 1588554 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:36:23.177871 1588554 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:36:23.186183 1588554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:36:23.186207 1588554 kubeadm.go:157] found existing configuration files:
	
	I0923 10:36:23.186251 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:36:23.195211 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:36:23.195272 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:36:23.203608 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:36:23.212052 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:36:23.212118 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:36:23.220697 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:36:23.231762 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:36:23.231826 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:36:23.239886 1588554 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:36:23.250151 1588554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:36:23.250215 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:36:23.257852 1588554 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:36:23.292982 1588554 kubeadm.go:310] W0923 10:36:23.292852 1589455 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:36:23.293485 1588554 kubeadm.go:310] W0923 10:36:23.293445 1589455 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:36:23.295381 1588554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:36:23.295429 1588554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:36:23.388509 1588554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:36:23.388613 1588554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:36:23.388622 1588554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:36:23.388626 1588554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:36:23.400110 1588554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:36:23.403660 1588554 out.go:235]   - Generating certificates and keys ...
	I0923 10:36:23.403706 1588554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:36:23.403719 1588554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:36:23.479635 1588554 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:36:23.612116 1588554 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:36:23.692069 1588554 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:36:23.926999 1588554 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:36:24.011480 1588554 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:36:24.011600 1588554 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 10:36:24.104614 1588554 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:36:24.104769 1588554 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 10:36:24.304540 1588554 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:36:24.538700 1588554 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:36:24.615897 1588554 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:36:24.616110 1588554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:36:24.791653 1588554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:36:24.910277 1588554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:36:25.215908 1588554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:36:25.289127 1588554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:36:25.490254 1588554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:36:25.490804 1588554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:36:25.493193 1588554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:36:25.495266 1588554 out.go:235]   - Booting up control plane ...
	I0923 10:36:25.495299 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:36:25.495318 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:36:25.495739 1588554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:36:25.515279 1588554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:36:25.519949 1588554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:36:25.519979 1588554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:36:25.765044 1588554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:36:25.765080 1588554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:36:26.266756 1588554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.690653ms
	I0923 10:36:26.266797 1588554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:36:31.268595 1588554 kubeadm.go:310] [api-check] The API server is healthy after 5.001820679s
	I0923 10:36:31.279620 1588554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:36:31.290992 1588554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:36:31.308130 1588554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:36:31.308158 1588554 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-12 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:36:31.315634 1588554 kubeadm.go:310] [bootstrap-token] Using token: vj37sq.3v8d1kp1945z41wj
	I0923 10:36:31.316963 1588554 out.go:235]   - Configuring RBAC rules ...
	I0923 10:36:31.317008 1588554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:36:31.320391 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:36:31.328142 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:36:31.330741 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:36:31.333381 1588554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:36:31.335890 1588554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:36:31.675856 1588554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:36:32.106847 1588554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:36:32.674219 1588554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:36:32.675126 1588554 kubeadm.go:310] 
	I0923 10:36:32.675137 1588554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:36:32.675141 1588554 kubeadm.go:310] 
	I0923 10:36:32.675148 1588554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:36:32.675152 1588554 kubeadm.go:310] 
	I0923 10:36:32.675156 1588554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:36:32.675160 1588554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:36:32.675164 1588554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:36:32.675171 1588554 kubeadm.go:310] 
	I0923 10:36:32.675175 1588554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:36:32.675179 1588554 kubeadm.go:310] 
	I0923 10:36:32.675184 1588554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:36:32.675188 1588554 kubeadm.go:310] 
	I0923 10:36:32.675192 1588554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:36:32.675196 1588554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:36:32.675207 1588554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:36:32.675211 1588554 kubeadm.go:310] 
	I0923 10:36:32.675217 1588554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:36:32.675221 1588554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:36:32.675225 1588554 kubeadm.go:310] 
	I0923 10:36:32.675228 1588554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vj37sq.3v8d1kp1945z41wj \
	I0923 10:36:32.675233 1588554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91a09f8ec29205faf582a48ccf10beda52dc431d394b0dc26a537d8edbd2b49c \
	I0923 10:36:32.675237 1588554 kubeadm.go:310] 	--control-plane 
	I0923 10:36:32.675242 1588554 kubeadm.go:310] 
	I0923 10:36:32.675246 1588554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:36:32.675252 1588554 kubeadm.go:310] 
	I0923 10:36:32.675255 1588554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vj37sq.3v8d1kp1945z41wj \
	I0923 10:36:32.675258 1588554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91a09f8ec29205faf582a48ccf10beda52dc431d394b0dc26a537d8edbd2b49c 
	I0923 10:36:32.679087 1588554 cni.go:84] Creating CNI manager for ""
	I0923 10:36:32.679120 1588554 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:36:32.680982 1588554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:36:32.682253 1588554 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:36:32.692879 1588554 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 10:36:32.693059 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3098274276 /etc/cni/net.d/1-k8s.conflist
	I0923 10:36:32.704393 1588554 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:36:32.704473 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:32.704510 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-12 minikube.k8s.io/updated_at=2024_09_23T10_36_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0923 10:36:32.713564 1588554 ops.go:34] apiserver oom_adj: -16
	I0923 10:36:32.777699 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:33.277929 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:33.778034 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:34.278552 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:34.777937 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:35.278677 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:35.777756 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:36.278547 1588554 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:36:36.343720 1588554 kubeadm.go:1113] duration metric: took 3.63930993s to wait for elevateKubeSystemPrivileges
	I0923 10:36:36.343761 1588554 kubeadm.go:394] duration metric: took 13.229771538s to StartCluster
	I0923 10:36:36.343783 1588554 settings.go:142] acquiring lock: {Name:mkf413d2c932a8f45f91708eee4886fc43a35e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:36.343846 1588554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:36:36.344451 1588554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-1577701/kubeconfig: {Name:mk42cd91ee317759dd4ab26721004c644d4d46c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:36:36.344664 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:36:36.344755 1588554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:36:36.344891 1588554 addons.go:69] Setting yakd=true in profile "minikube"
	I0923 10:36:36.344910 1588554 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0923 10:36:36.344913 1588554 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0923 10:36:36.344939 1588554 addons.go:69] Setting registry=true in profile "minikube"
	I0923 10:36:36.344931 1588554 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0923 10:36:36.344946 1588554 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0923 10:36:36.344964 1588554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0923 10:36:36.344976 1588554 addons.go:234] Setting addon registry=true in "minikube"
	I0923 10:36:36.344980 1588554 mustload.go:65] Loading cluster: minikube
	I0923 10:36:36.344979 1588554 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0923 10:36:36.344992 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.344990 1588554 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:36:36.345000 1588554 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0923 10:36:36.345005 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345031 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345045 1588554 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0923 10:36:36.345072 1588554 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0923 10:36:36.345087 1588554 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0923 10:36:36.345088 1588554 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0923 10:36:36.345104 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345114 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345179 1588554 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:36:36.345317 1588554 addons.go:69] Setting volcano=true in profile "minikube"
	I0923 10:36:36.345335 1588554 addons.go:234] Setting addon volcano=true in "minikube"
	I0923 10:36:36.345361 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345658 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345675 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345680 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345690 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345717 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345758 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345762 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345775 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345780 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345807 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.345824 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345827 1588554 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0923 10:36:36.345827 1588554 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0923 10:36:36.344919 1588554 addons.go:234] Setting addon yakd=true in "minikube"
	I0923 10:36:36.345839 1588554 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0923 10:36:36.345843 1588554 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0923 10:36:36.344930 1588554 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0923 10:36:36.345858 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345860 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345861 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345874 1588554 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0923 10:36:36.345918 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.345811 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346177 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346191 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346221 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346328 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346342 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346371 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346524 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346536 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346550 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.345861 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.346579 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346655 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.346673 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.346705 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345810 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.345717 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.346539 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.347192 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.347221 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.347233 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.347253 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.347284 1588554 out.go:177] * Configuring local host environment ...
	I0923 10:36:36.345829 1588554 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0923 10:36:36.347650 1588554 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0923 10:36:36.348407 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.348430 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.348463 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0923 10:36:36.348690 1588554 out.go:270] * 
	W0923 10:36:36.348780 1588554 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0923 10:36:36.348809 1588554 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0923 10:36:36.348865 1588554 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0923 10:36:36.348897 1588554 out.go:270] * 
	W0923 10:36:36.348999 1588554 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0923 10:36:36.349040 1588554 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0923 10:36:36.349080 1588554 out.go:270] * 
	W0923 10:36:36.349130 1588554 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0923 10:36:36.349173 1588554 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0923 10:36:36.349199 1588554 out.go:270] * 
	W0923 10:36:36.349236 1588554 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0923 10:36:36.349282 1588554 start.go:235] Will wait 6m0s for node &{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:36:36.345810 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.350050 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.350088 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.350710 1588554 out.go:177] * Verifying Kubernetes components...
	I0923 10:36:36.352239 1588554 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:36:36.369581 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.369720 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.370463 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.371382 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.373298 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.379392 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.383028 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.385097 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.385628 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.385693 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.389742 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.389782 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.389793 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402210 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402285 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402285 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402325 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402356 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402407 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402488 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.402530 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.402557 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.402328 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.406952 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.406987 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.407339 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.407394 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.414599 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.414632 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.415393 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.415455 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.415667 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.415722 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.417736 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.417799 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.420551 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.420602 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.421969 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.421994 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.422984 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.423319 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.423344 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.424659 1588554 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:36:36.424874 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.424899 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.428268 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.428559 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.430071 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:36:36.430076 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:36:36.430207 1588554 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:36:36.431382 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:36:36.431427 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:36:36.431585 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube840197264 /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:36:36.431790 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:36:36.431815 1588554 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:36:36.431987 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1728725482 /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:36:36.433518 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:36:36.434702 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:36:36.435367 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.435397 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.436902 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:36:36.438150 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:36:36.439277 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.439337 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.440540 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.440996 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:36:36.442010 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:36:36.442071 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.442098 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.442561 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.442772 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.443079 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.443136 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.443350 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.443375 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.443492 1588554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443518 1588554 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0923 10:36:36.443525 1588554 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443566 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.443844 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:36:36.443879 1588554 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:36:36.444008 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4228746672 /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:36:36.444580 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:36:36.446035 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:36:36.446930 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.446950 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.447416 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.448168 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.448190 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.448643 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.448661 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.449758 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0923 10:36:36.449765 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:36:36.449802 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:36:36.449942 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4141716628 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:36:36.452784 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.452686 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0923 10:36:36.454911 1588554 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:36:36.454973 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.455634 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.456554 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.457231 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:36:36.457268 1588554 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:36:36.457428 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1881343942 /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:36:36.458064 1588554 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.458100 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:36:36.458238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2629288326 /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.458427 1588554 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:36:36.458490 1588554 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0923 10:36:36.458554 1588554 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:36:36.458748 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.459583 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.459224 1588554 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0923 10:36:36.459875 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.459904 1588554 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.459934 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:36:36.460073 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1172599530 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.460516 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:36:36.460548 1588554 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:36:36.460695 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1059056177 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:36:36.462006 1588554 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.462043 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
	I0923 10:36:36.462614 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3721652212 /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.464913 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.464936 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.464972 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.467000 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.472480 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:36:36.473238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube726889991 /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.480760 1588554 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0923 10:36:36.480939 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:36.485106 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:36.485141 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:36.485190 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:36.487844 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:36:36.487878 1588554 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:36:36.488012 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3601575597 /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:36:36.489111 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:36:36.491189 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:36:36.491220 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:36:36.491369 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3194307137 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:36:36.492639 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.492667 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.494194 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.494218 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.494867 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:36:36.498982 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:36:36.499389 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.500765 1588554 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.500800 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:36:36.500956 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1985750997 /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.501929 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.503522 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.507731 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:36:36.507981 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:36:36.508221 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2102644874 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:36:36.508499 1588554 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:36:36.508667 1588554 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:36:36.509791 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:36:36.509885 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:36:36.510186 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:36:36.510211 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:36:36.510259 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:36:36.510276 1588554 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:36:36.510535 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2223790766 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:36:36.510687 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2284125594 /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:36:36.511165 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1172030255 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:36:36.518843 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.518932 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.519210 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:36:36.519243 1588554 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:36:36.519417 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2081246003 /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:36:36.527052 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:36.530307 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:36:36.531182 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:36:36.531199 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:36:36.531224 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:36:36.531366 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2359416048 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:36:36.534852 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.534897 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.534862 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:36:36.534931 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:36:36.534930 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:36:36.534953 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:36:36.535115 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube169766603 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:36:36.535148 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube873661914 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:36:36.540683 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.547811 1588554 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:36:36.548029 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:36:36.548063 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:36:36.548238 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube411864712 /etc/kubernetes/addons/ig-role.yaml
	I0923 10:36:36.553057 1588554 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:36:36.555188 1588554 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:36:36.555273 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:36:36.555312 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:36:36.555486 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4206261347 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:36:36.562063 1588554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.562124 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:36:36.562318 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918834683 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.563155 1588554 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:36:36.563195 1588554 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:36:36.563361 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2570607285 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:36:36.568213 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:36:36.568257 1588554 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:36:36.568398 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube393911802 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:36:36.571999 1588554 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.572033 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:36:36.572185 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2353575520 /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.577466 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:36.577543 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:36.587661 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:36:36.598560 1588554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.598607 1588554 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:36:36.598954 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2771751730 /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.603217 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:36:36.603313 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:36:36.603600 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4069496750 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:36:36.604133 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:36:36.604165 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:36:36.604308 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1964334193 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:36:36.604545 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:36:36.604574 1588554 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:36:36.604700 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2583663156 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:36:36.610522 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:36.610602 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:36.615633 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:36:36.616448 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:36.616504 1588554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.616528 1588554 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0923 10:36:36.616540 1588554 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.616587 1588554 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.633448 1588554 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.633487 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:36:36.633636 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3570026092 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.637790 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:36:36.637820 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:36:36.637954 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2992782773 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:36:36.646982 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:36:36.677372 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:36.679555 1588554 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:36:36.679857 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4202431507 /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.688839 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:36:36.688874 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:36:36.689001 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube389006966 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:36:36.693416 1588554 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:36:36.693456 1588554 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:36:36.693585 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2951849839 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:36:36.738946 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:36:36.774333 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:36:36.774371 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:36:36.774529 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1226040952 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:36:36.785891 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:36:36.785936 1588554 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:36:36.786131 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1330733841 /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:36:36.796363 1588554 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:36:36.807897 1588554 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.807939 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:36:36.808082 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube111334727 /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.814837 1588554 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 10:36:36.818242 1588554 node_ready.go:49] node "ubuntu-20-agent-12" has status "Ready":"True"
	I0923 10:36:36.818281 1588554 node_ready.go:38] duration metric: took 3.403871ms for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 10:36:36.818293 1588554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:36:36.823705 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:36:36.828322 1588554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:36.832595 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:36:36.832627 1588554 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:36:36.832974 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1712125769 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:36:36.870153 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:36:36.870197 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:36:36.870386 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2973576979 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:36:36.926104 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:36:36.926143 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:36:36.926289 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2280122930 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:36:36.938896 1588554 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:36.938934 1588554 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:36:36.939070 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1690561903 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:36.950670 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:36:37.100928 1588554 addons.go:475] Verifying addon registry=true in "minikube"
	I0923 10:36:37.102814 1588554 out.go:177] * Verifying registry addon...
	I0923 10:36:37.112453 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:36:37.120259 1588554 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:36:37.187559 1588554 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0923 10:36:37.634285 1588554 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:36:37.634317 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:37.695664 1588554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0923 10:36:37.724258 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.07719175s)
	I0923 10:36:37.724301 1588554 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0923 10:36:37.739850 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.124159231s)
	I0923 10:36:37.742561 1588554 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0923 10:36:37.849519 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.025767323s)
	I0923 10:36:38.120128 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:38.376349 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.69890606s)
	W0923 10:36:38.376406 1588554 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:36:38.376435 1588554 retry.go:31] will retry after 154.227647ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:36:38.532717 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:36:38.617615 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:38.835917 1588554 pod_ready.go:103] pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:39.116010 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:39.531492 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.580742626s)
	I0923 10:36:39.531534 1588554 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0923 10:36:39.537060 1588554 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:36:39.539558 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:36:39.547478 1588554 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:36:39.547508 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:39.616393 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:39.677521 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.146291745s)
	I0923 10:36:40.048496 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:40.116802 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:40.545476 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:40.617107 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:40.834321 1588554 pod_ready.go:93] pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:40.834347 1588554 pod_ready.go:82] duration metric: took 4.005994703s for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:40.834359 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:41.044378 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:41.144560 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:41.351204 1588554 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.818400841s)
	I0923 10:36:41.545380 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:41.616429 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.044309 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:42.116963 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.545513 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:42.616637 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:42.841366 1588554 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:43.045300 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:43.116762 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:43.431875 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:36:43.432127 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3269004284 /var/lib/minikube/google_application_credentials.json
	I0923 10:36:43.445163 1588554 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:36:43.445319 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3403460145 /var/lib/minikube/google_cloud_project
	I0923 10:36:43.457431 1588554 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0923 10:36:43.457499 1588554 host.go:66] Checking if "minikube" exists ...
	I0923 10:36:43.458127 1588554 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 10:36:43.458149 1588554 api_server.go:166] Checking apiserver status ...
	I0923 10:36:43.458181 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:43.479053 1588554 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1589857/cgroup
	I0923 10:36:43.491340 1588554 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d"
	I0923 10:36:43.491424 1588554 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/98649c04ed1910d099a35a8d07ec3f115f585a596ebab50aa9eb33aff375843d/freezer.state
	I0923 10:36:43.503388 1588554 api_server.go:204] freezer state: "THAWED"
	I0923 10:36:43.503426 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:43.508517 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:43.508577 1588554 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:36:43.511610 1588554 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:36:43.513346 1588554 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:36:43.514725 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:36:43.514758 1588554 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:36:43.514881 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube616037526 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:36:43.525139 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:36:43.525184 1588554 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:36:43.525334 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3406397122 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:36:43.536623 1588554 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.536656 1588554 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:36:43.536845 1588554 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3654027324 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.544627 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:43.548001 1588554 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:36:43.616664 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:44.106662 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:44.245172 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:44.462186 1588554 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0923 10:36:44.463828 1588554 out.go:177] * Verifying gcp-auth addon...
	I0923 10:36:44.466561 1588554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:36:44.469735 1588554 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:36:44.571760 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:44.616121 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:45.045508 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:45.116582 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:45.342074 1588554 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"False"
	I0923 10:36:45.544902 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:45.617645 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.044759 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:46.117793 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.546485 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:46.616891 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:46.840864 1588554 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.840888 1588554 pod_ready.go:82] duration metric: took 6.006520139s for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.840899 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.846458 1588554 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.846487 1588554 pod_ready.go:82] duration metric: took 5.579842ms for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.846499 1588554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.850991 1588554 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 10:36:46.851013 1588554 pod_ready.go:82] duration metric: took 4.506621ms for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 10:36:46.851020 1588554 pod_ready.go:39] duration metric: took 10.032714922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:36:46.851040 1588554 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:36:46.851099 1588554 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:36:46.875129 1588554 api_server.go:72] duration metric: took 10.525769516s to wait for apiserver process to appear ...
	I0923 10:36:46.875164 1588554 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:36:46.875191 1588554 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 10:36:46.879815 1588554 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 10:36:46.880904 1588554 api_server.go:141] control plane version: v1.31.1
	I0923 10:36:46.880933 1588554 api_server.go:131] duration metric: took 5.761723ms to wait for apiserver health ...
	I0923 10:36:46.880944 1588554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:36:46.889660 1588554 system_pods.go:59] 16 kube-system pods found
	I0923 10:36:46.889699 1588554 system_pods.go:61] "coredns-7c65d6cfc9-p5xcl" [f5f9a7c8-fde0-47d4-ad0d-64ad04053a9c] Running
	I0923 10:36:46.889712 1588554 system_pods.go:61] "csi-hostpath-attacher-0" [3359d397-e4ff-42f7-a50a-d3f528d35993] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:36:46.889722 1588554 system_pods.go:61] "csi-hostpath-resizer-0" [9c4d8c86-795e-4ef6-a3ee-092372993d50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:36:46.889739 1588554 system_pods.go:61] "csi-hostpathplugin-2flxk" [1fd9aa09-39b0-440c-a97d-578bbad40f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:36:46.889746 1588554 system_pods.go:61] "etcd-ubuntu-20-agent-12" [a5459b2e-0d67-4c43-9e0d-f680efb64d3f] Running
	I0923 10:36:46.889752 1588554 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-12" [1a730626-aab7-4d08-b75b-523608e16b08] Running
	I0923 10:36:46.889759 1588554 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-12" [e67abe58-a228-4b5d-a487-1afe60ef2341] Running
	I0923 10:36:46.889765 1588554 system_pods.go:61] "kube-proxy-275md" [5201ac4e-6f2a-4040-ba5b-de3260351ceb] Running
	I0923 10:36:46.889770 1588554 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-12" [a148d437-fa1a-470b-a96d-ac0bd83228cd] Running
	I0923 10:36:46.889777 1588554 system_pods.go:61] "metrics-server-84c5f94fbc-l8xpt" [be83f637-49a0-4d61-b588-544359407926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:36:46.889783 1588554 system_pods.go:61] "nvidia-device-plugin-daemonset-rmgc2" [7b196bf3-bd4c-4575-9cd3-d1c7adf5e6be] Running
	I0923 10:36:46.889793 1588554 system_pods.go:61] "registry-66c9cd494c-xghlh" [3805a0ce-c102-4a58-92fb-1845d803f30a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:36:46.889800 1588554 system_pods.go:61] "registry-proxy-j2dg7" [04db77a5-6d0f-40b1-b220-f94a39762520] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:36:46.889810 1588554 system_pods.go:61] "snapshot-controller-56fcc65765-ncqwr" [9e2acf06-ed7b-441d-95cd-2bf1bcde1ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.889821 1588554 system_pods.go:61] "snapshot-controller-56fcc65765-xp8jb" [420b2463-f719-45de-a16b-01add2f57250] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.889826 1588554 system_pods.go:61] "storage-provisioner" [609264e3-b351-446c-bb44-88cf8a4fbfca] Running
	I0923 10:36:46.889835 1588554 system_pods.go:74] duration metric: took 8.88361ms to wait for pod list to return data ...
	I0923 10:36:46.889844 1588554 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:36:46.892857 1588554 default_sa.go:45] found service account: "default"
	I0923 10:36:46.892882 1588554 default_sa.go:55] duration metric: took 3.031168ms for default service account to be created ...
	I0923 10:36:46.892893 1588554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:36:46.901634 1588554 system_pods.go:86] 16 kube-system pods found
	I0923 10:36:46.901674 1588554 system_pods.go:89] "coredns-7c65d6cfc9-p5xcl" [f5f9a7c8-fde0-47d4-ad0d-64ad04053a9c] Running
	I0923 10:36:46.901688 1588554 system_pods.go:89] "csi-hostpath-attacher-0" [3359d397-e4ff-42f7-a50a-d3f528d35993] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:36:46.901699 1588554 system_pods.go:89] "csi-hostpath-resizer-0" [9c4d8c86-795e-4ef6-a3ee-092372993d50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:36:46.901714 1588554 system_pods.go:89] "csi-hostpathplugin-2flxk" [1fd9aa09-39b0-440c-a97d-578bbad40f74] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:36:46.901725 1588554 system_pods.go:89] "etcd-ubuntu-20-agent-12" [a5459b2e-0d67-4c43-9e0d-f680efb64d3f] Running
	I0923 10:36:46.901732 1588554 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-12" [1a730626-aab7-4d08-b75b-523608e16b08] Running
	I0923 10:36:46.901741 1588554 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-12" [e67abe58-a228-4b5d-a487-1afe60ef2341] Running
	I0923 10:36:46.901747 1588554 system_pods.go:89] "kube-proxy-275md" [5201ac4e-6f2a-4040-ba5b-de3260351ceb] Running
	I0923 10:36:46.901753 1588554 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-12" [a148d437-fa1a-470b-a96d-ac0bd83228cd] Running
	I0923 10:36:46.901767 1588554 system_pods.go:89] "metrics-server-84c5f94fbc-l8xpt" [be83f637-49a0-4d61-b588-544359407926] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:36:46.901776 1588554 system_pods.go:89] "nvidia-device-plugin-daemonset-rmgc2" [7b196bf3-bd4c-4575-9cd3-d1c7adf5e6be] Running
	I0923 10:36:46.901784 1588554 system_pods.go:89] "registry-66c9cd494c-xghlh" [3805a0ce-c102-4a58-92fb-1845d803f30a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:36:46.901790 1588554 system_pods.go:89] "registry-proxy-j2dg7" [04db77a5-6d0f-40b1-b220-f94a39762520] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:36:46.901801 1588554 system_pods.go:89] "snapshot-controller-56fcc65765-ncqwr" [9e2acf06-ed7b-441d-95cd-2bf1bcde1ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.901810 1588554 system_pods.go:89] "snapshot-controller-56fcc65765-xp8jb" [420b2463-f719-45de-a16b-01add2f57250] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:36:46.901814 1588554 system_pods.go:89] "storage-provisioner" [609264e3-b351-446c-bb44-88cf8a4fbfca] Running
	I0923 10:36:46.901824 1588554 system_pods.go:126] duration metric: took 8.925234ms to wait for k8s-apps to be running ...
	I0923 10:36:46.901834 1588554 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:36:46.901887 1588554 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:36:46.916755 1588554 system_svc.go:56] duration metric: took 14.881074ms WaitForService to wait for kubelet
	I0923 10:36:46.916789 1588554 kubeadm.go:582] duration metric: took 10.567438885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:36:46.916809 1588554 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:36:46.920579 1588554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 10:36:46.920616 1588554 node_conditions.go:123] node cpu capacity is 8
	I0923 10:36:46.920632 1588554 node_conditions.go:105] duration metric: took 3.817539ms to run NodePressure ...
	I0923 10:36:46.920648 1588554 start.go:241] waiting for startup goroutines ...
	I0923 10:36:47.045158 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:47.117155 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:47.572416 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:47.616622 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:48.045426 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:48.116767 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:48.573214 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:48.616845 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:49.044221 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:49.117209 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:49.543831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:49.615831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:50.044752 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:50.117047 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:50.572160 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:50.617157 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:36:51.045029 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:51.116892 1588554 kapi.go:107] duration metric: took 14.004458573s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:36:51.571831 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:52.044681 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:52.544488 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:53.071964 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:53.544286 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:54.044362 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:54.572181 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:55.073837 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:55.544285 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:56.044544 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:56.545079 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:57.044265 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:57.544710 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:58.074493 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:58.544754 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:59.044416 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:36:59.545731 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:00.044364 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:00.545006 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:01.043696 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:01.544143 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:02.044850 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:02.544007 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:03.073713 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:03.544432 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:04.044116 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:04.544249 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:05.084663 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:05.545630 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:06.073711 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:06.545674 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:07.074336 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:07.573379 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:08.072260 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:08.573326 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:09.046665 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:09.572302 1588554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:37:10.044323 1588554 kapi.go:107] duration metric: took 30.504755495s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:42:44.467839 1588554 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0923 10:42:44.467877 1588554 kapi.go:107] duration metric: took 6m0.001323817s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0923 10:42:44.467989 1588554 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0923 10:42:44.469896 1588554 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver
	I0923 10:42:44.471562 1588554 addons.go:510] duration metric: took 6m8.126806783s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver]
	I0923 10:42:44.471618 1588554 start.go:246] waiting for cluster config update ...
	I0923 10:42:44.471643 1588554 start.go:255] writing updated cluster config ...
	I0923 10:42:44.471977 1588554 exec_runner.go:51] Run: rm -f paused
	I0923 10:42:44.523125 1588554 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:42:44.524945 1588554 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 11:04:15 UTC. --
	Sep 23 10:48:04 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:48:04Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.540680915Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.540684219Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.542670843Z" level=error msg="Error running exec 5fd2d79e980950ca565c3a912c8440ea08719c5a16c1780c5869c00f977ccd0f in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=5608c228de976ea9 traceID=04969482329070952bf3db909444f8ca
	Sep 23 10:48:05 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:48:05.744401240Z" level=info msg="ignoring event" container=3827f0f3d5112d058f27d4c9b88f316e39b83b35f1895269e7248cf49f214165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.739922067Z" level=info msg="ignoring event" container=cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.812004030Z" level=info msg="ignoring event" container=9740e1ab45dffcba4eaa96160ed6e0a5385ee27e147bb376ac61e7e743929bfd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.882744210Z" level=info msg="ignoring event" container=b877c8259724a59128251b16cfbdf29c388b2ab853f4a4a08190f60af4e3434d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:45 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:45.988558076Z" level=info msg="ignoring event" container=d6ea241113e500cf3b405d989c416e01c0bc41267ce5bffed361a01c11edbd21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:49:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:49:52Z" level=error msg="error getting RW layer size for container ID 'cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77': Error response from daemon: No such container: cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77"
	Sep 23 10:49:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:49:52Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cc089ff43590825456ab7fcdbf83739a202952dd1d95cbb9ffd4fd7186b85e77'"
	Sep 23 10:49:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:49:52Z" level=error msg="error getting RW layer size for container ID '9740e1ab45dffcba4eaa96160ed6e0a5385ee27e147bb376ac61e7e743929bfd': Error response from daemon: No such container: 9740e1ab45dffcba4eaa96160ed6e0a5385ee27e147bb376ac61e7e743929bfd"
	Sep 23 10:49:52 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:49:52Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9740e1ab45dffcba4eaa96160ed6e0a5385ee27e147bb376ac61e7e743929bfd'"
	Sep 23 10:49:52 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:49:52.396588609Z" level=info msg="ignoring event" container=f44622d46ba2ff4fa5093d028c0d993d004a691db3525cf78779461bd1b6a21f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:50:04 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:50:04.012648513Z" level=info msg="ignoring event" container=7df30468750a3330ba5db4cc23ff317ad04892789778ac43bcf58194a92677f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:50:04 ubuntu-20-agent-12 dockerd[1588786]: time="2024-09-23T10:50:04.142232385Z" level=info msg="ignoring event" container=26d7d65f4a1100216cd9a8d9613b9d25ba9e84b925315943e951ae668a77c600 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:52:56 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:52:56Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:53:01 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:53:01Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:53:01 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:53:01Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 10:58:03 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:58:03Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 10:58:08 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:58:08Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 10:58:15 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T10:58:15Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 11:03:16 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T11:03:16Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: Status: Image is up to date for volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 11:03:17 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T11:03:17Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: Status: Image is up to date for volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 11:03:21 ubuntu-20-agent-12 cri-dockerd[1589115]: time="2024-09-23T11:03:21Z" level=info msg="Stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	1c0aec03476e1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          27 minutes ago      Running             csi-snapshotter                          0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	f22e4f1571647       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          27 minutes ago      Running             csi-provisioner                          0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	b43acbe9c46ae       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            27 minutes ago      Running             liveness-probe                           0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	80af8a926afc3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           27 minutes ago      Running             hostpath                                 0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	6f57e7ad00a9e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                27 minutes ago      Running             node-driver-registrar                    0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	369c356333963       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              27 minutes ago      Running             csi-resizer                              0                   83f21cc9148ed       csi-hostpath-resizer-0
	764a5f36015a2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   27 minutes ago      Running             csi-external-health-monitor-controller   0                   1e20aed46aae9       csi-hostpathplugin-2flxk
	5e03ecec68932       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             27 minutes ago      Running             csi-attacher                             0                   04bee9af65b88       csi-hostpath-attacher-0
	2a9c9054db024       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      27 minutes ago      Running             volume-snapshot-controller               0                   954881763f4d2       snapshot-controller-56fcc65765-xp8jb
	5189bf51dfe60       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      27 minutes ago      Running             volume-snapshot-controller               0                   3a5a27bdb1e27       snapshot-controller-56fcc65765-ncqwr
	100fd02a1faf5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        27 minutes ago      Running             yakd                                     0                   aad214bb107e1       yakd-dashboard-67d98fc6b-j4j2x
	e6929e7afa035       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               27 minutes ago      Running             cloud-spanner-emulator                   0                   45d7b20be1819       cloud-spanner-emulator-5b584cc74-97lv7
	88b34955ceb18       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       27 minutes ago      Running             local-path-provisioner                   0                   34f59459d9996       local-path-provisioner-86d989889c-r6cj8
	71c8aef5c5c24       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     27 minutes ago      Running             nvidia-device-plugin-ctr                 0                   2b86e9d29eb33       nvidia-device-plugin-daemonset-rmgc2
	c98c33bab4e43       c69fa2e9cbf5f                                                                                                                                27 minutes ago      Running             coredns                                  0                   f681430aabf24       coredns-7c65d6cfc9-p5xcl
	045fad5ce6ab4       60c005f310ff3                                                                                                                                27 minutes ago      Running             kube-proxy                               0                   6e8a6bce97790       kube-proxy-275md
	a88800a1ce5b9       6e38f40d628db                                                                                                                                27 minutes ago      Running             storage-provisioner                      0                   e04842fad72fa       storage-provisioner
	e008cb9d44fcb       175ffd71cce3d                                                                                                                                27 minutes ago      Running             kube-controller-manager                  0                   2f63f87bd15d1       kube-controller-manager-ubuntu-20-agent-12
	cefe11af8e634       9aa1fad941575                                                                                                                                27 minutes ago      Running             kube-scheduler                           0                   3f8185d06efd3       kube-scheduler-ubuntu-20-agent-12
	98649c04ed191       6bab7719df100                                                                                                                                27 minutes ago      Running             kube-apiserver                           0                   60b7c561b6237       kube-apiserver-ubuntu-20-agent-12
	891452784bf9b       2e96e5913fc06                                                                                                                                27 minutes ago      Running             etcd                                     0                   087dc8c7c97f8       etcd-ubuntu-20-agent-12
	
	
	==> coredns [c98c33bab4e4] <==
	[INFO] 10.244.0.5:39130 - 49408 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011371s
	[INFO] 10.244.0.5:36683 - 40984 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092092s
	[INFO] 10.244.0.5:36683 - 54814 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000177141s
	[INFO] 10.244.0.5:48486 - 28442 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000086929s
	[INFO] 10.244.0.5:48486 - 5406 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000127637s
	[INFO] 10.244.0.5:59402 - 60382 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000079785s
	[INFO] 10.244.0.5:59402 - 6106 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000100251s
	[INFO] 10.244.0.5:56367 - 45414 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00007586s
	[INFO] 10.244.0.5:56367 - 44632 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000107663s
	[INFO] 10.244.0.5:56779 - 21145 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071153s
	[INFO] 10.244.0.5:56779 - 17307 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139638s
	[INFO] 10.244.0.5:50701 - 22008 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00010586s
	[INFO] 10.244.0.5:50701 - 60925 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136235s
	[INFO] 10.244.0.5:34160 - 49361 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079304s
	[INFO] 10.244.0.5:34160 - 47831 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000185735s
	[INFO] 10.244.0.5:46275 - 16771 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008177s
	[INFO] 10.244.0.5:46275 - 49536 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108335s
	[INFO] 10.244.0.5:47968 - 20526 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.00008698s
	[INFO] 10.244.0.5:47968 - 10797 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000120657s
	[INFO] 10.244.0.5:37248 - 56533 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000080178s
	[INFO] 10.244.0.5:37248 - 45520 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000103163s
	[INFO] 10.244.0.5:39385 - 32664 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000082135s
	[INFO] 10.244.0.5:39385 - 56732 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000177006s
	[INFO] 10.244.0.5:37963 - 19331 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068935s
	[INFO] 10.244.0.5:37963 - 62598 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104055s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-12
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-12
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_36_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-12
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-12"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:36:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-12
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:04:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:03:04 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:03:04 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:03:04 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:03:04 +0000   Mon, 23 Sep 2024 10:36:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.128.15.239
	  Hostname:    ubuntu-20-agent-12
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                26e2d22b-def2-c216-b2a9-007020fa8ce7
	  Boot ID:                    83656df0-482a-417d-b7fc-90bc5fb37652
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-97lv7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7c65d6cfc9-p5xcl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27m
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 csi-hostpathplugin-2flxk                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 etcd-ubuntu-20-agent-12                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27m
	  kube-system                 kube-apiserver-ubuntu-20-agent-12             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-12    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-275md                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ubuntu-20-agent-12             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 nvidia-device-plugin-daemonset-rmgc2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 snapshot-controller-56fcc65765-ncqwr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 snapshot-controller-56fcc65765-xp8jb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  local-path-storage          local-path-provisioner-86d989889c-r6cj8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  volcano-system              volcano-admission-7f54bd7598-rfghv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  volcano-system              volcano-admission-init-gh7z4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  volcano-system              volcano-controllers-5ff7c5d4db-529t5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  volcano-system              volcano-scheduler-79dc4b78bb-zdd4g            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-j4j2x                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             298Mi (0%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 27m   kube-proxy       
	  Normal   Starting                 27m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 27m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27m   node-controller  Node ubuntu-20-agent-12 event: Registered Node ubuntu-20-agent-12 in Controller
	
	
	==> dmesg <==
	[  +0.000004] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ce 28 f8 d2 0a cd 08 06
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 7e b8 fc 4c f3 9c 08 06
	[Sep23 10:36] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 6e 58 88 a9 4c 08 06
	[ +10.128758] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 aa 9b fb 38 08 06
	[  +0.000410] IPv4: martian source 10.244.0.5 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 6e 58 88 a9 4c 08 06
	[  +2.001125] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 02 27 ad 4b 0d 08 06
	[  +0.032532] IPv4: martian source 10.244.0.5 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e ed 25 59 75 f3 08 06
	[  +3.912883] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 62 ba d6 13 c3 e3 08 06
	[  +2.709643] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 66 31 90 37 c7 08 06
	[  +0.019221] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff da 1d 22 9e 8e 47 08 06
	[  +9.151781] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 ca ad 28 d8 56 08 06
	[  +0.348439] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 59 84 5e b0 7b 08 06
	[  +0.569834] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e c1 ff 28 29 42 08 06
	
	
	==> etcd [891452784bf9] <==
	{"level":"info","ts":"2024-09-23T10:36:28.600975Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.601004Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:36:28.601085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:36:28.601103Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:36:28.601891Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:36:28.602013Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:36:28.602702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.239:2379"}
	{"level":"info","ts":"2024-09-23T10:36:28.603219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:36:44.242056Z","caller":"traceutil/trace.go:171","msg":"trace[1467056625] linearizableReadLoop","detail":"{readStateIndex:849; appliedIndex:845; }","duration":"128.026224ms","start":"2024-09-23T10:36:44.114013Z","end":"2024-09-23T10:36:44.242039Z","steps":["trace[1467056625] 'read index received'  (duration: 46.430648ms)","trace[1467056625] 'applied index is now lower than readState.Index'  (duration: 81.594963ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:36:44.242093Z","caller":"traceutil/trace.go:171","msg":"trace[2126161537] transaction","detail":"{read_only:false; response_revision:831; number_of_response:1; }","duration":"134.824059ms","start":"2024-09-23T10:36:44.107242Z","end":"2024-09-23T10:36:44.242066Z","steps":["trace[2126161537] 'process raft request'  (duration: 123.210784ms)","trace[2126161537] 'compare'  (duration: 11.439426ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:36:44.242290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.188403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:36:44.242444Z","caller":"traceutil/trace.go:171","msg":"trace[1472265816] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:832; }","duration":"128.418389ms","start":"2024-09-23T10:36:44.114009Z","end":"2024-09-23T10:36:44.242428Z","steps":["trace[1472265816] 'agreement among raft nodes before linearized reading'  (duration: 128.138624ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:36:44.242340Z","caller":"traceutil/trace.go:171","msg":"trace[1535126050] transaction","detail":"{read_only:false; response_revision:832; number_of_response:1; }","duration":"133.407624ms","start":"2024-09-23T10:36:44.108904Z","end":"2024-09-23T10:36:44.242312Z","steps":["trace[1535126050] 'process raft request'  (duration: 133.085569ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:46:28.621172Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1493}
	{"level":"info","ts":"2024-09-23T10:46:28.644160Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1493,"took":"22.540162ms","hash":974073395,"current-db-size-bytes":7499776,"current-db-size":"7.5 MB","current-db-size-in-use-bytes":3624960,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T10:46:28.644213Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":974073395,"revision":1493,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T10:51:28.626660Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1885}
	{"level":"info","ts":"2024-09-23T10:51:28.643237Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1885,"took":"16.000986ms","hash":3586383635,"current-db-size-bytes":7499776,"current-db-size":"7.5 MB","current-db-size-in-use-bytes":3063808,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-09-23T10:51:28.643296Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3586383635,"revision":1885,"compact-revision":1493}
	{"level":"info","ts":"2024-09-23T10:56:28.631649Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2378}
	{"level":"info","ts":"2024-09-23T10:56:28.648921Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2378,"took":"16.765171ms","hash":2357731407,"current-db-size-bytes":7499776,"current-db-size":"7.5 MB","current-db-size-in-use-bytes":2879488,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-09-23T10:56:28.648985Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2357731407,"revision":2378,"compact-revision":1885}
	{"level":"info","ts":"2024-09-23T11:01:28.637301Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2796}
	{"level":"info","ts":"2024-09-23T11:01:28.654262Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2796,"took":"16.482321ms","hash":3495905728,"current-db-size-bytes":7499776,"current-db-size":"7.5 MB","current-db-size-in-use-bytes":2437120,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-09-23T11:01:28.654323Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3495905728,"revision":2796,"compact-revision":2378}
	
	
	==> kernel <==
	 11:04:16 up 1 day, 16:46,  0 users,  load average: 0.02, 0.08, 0.32
	Linux ubuntu-20-agent-12 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [98649c04ed19] <==
	W0923 11:00:47.717630       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 11:00:47.717689       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 11:00:47.719296       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 11:00:47.719316       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 11:01:42.974063       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 11:01:42.974107       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 11:01:42.976618       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 11:01:47.727865       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 11:01:47.727914       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 11:01:47.727865       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 11:01:47.727946       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 11:01:47.729543       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 11:01:47.729543       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 11:02:47.738072       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 11:02:47.738125       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 11:02:47.738073       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 11:02:47.738175       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 11:02:47.740574       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 11:02:47.740595       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 11:03:47.748485       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 11:03:47.748542       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:47.748496       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.229.99:443: connect: connection refused
	E0923 11:03:47.748572       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.229.99:443: connect: connection refused" logger="UnhandledError"
	W0923 11:03:47.750164       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	W0923 11:03:47.750181       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.63.72:443: connect: connection refused
	
	
	==> kube-controller-manager [e008cb9d44fc] <==
	E0923 11:01:47.731429       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 11:01:47.731432       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	W0923 11:01:48.059485       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:01:48.059548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:02:33.575872       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:02:33.575922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 11:02:47.741148       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 11:02:47.741192       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 11:02:47.742345       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 11:02:47.742351       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	I0923 11:03:04.599201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ubuntu-20-agent-12"
	W0923 11:03:12.705535       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:03:12.705592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 11:03:29.172410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="110.951µs"
	I0923 11:03:30.172484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="75.936µs"
	I0923 11:03:37.171801       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	I0923 11:03:40.172001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="69.541µs"
	I0923 11:03:45.172882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="81.921µs"
	E0923 11:03:47.750739       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 11:03:47.750779       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 11:03:47.751916       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	E0923 11:03:47.751956       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.102.63.72:443: connect: connection refused" logger="UnhandledError"
	I0923 11:03:50.172851       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	W0923 11:03:50.447980       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:03:50.448030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [045fad5ce6ab] <==
	I0923 10:36:38.573406       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:36:38.729619       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.128.15.239"]
	E0923 10:36:38.729768       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:36:38.818441       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:36:38.818516       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:36:38.825889       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:36:38.826286       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:36:38.826330       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:36:38.829447       1 config.go:328] "Starting node config controller"
	I0923 10:36:38.829476       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:36:38.830499       1 config.go:199] "Starting service config controller"
	I0923 10:36:38.830549       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:36:38.830606       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:36:38.830612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:36:38.931771       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:36:38.931860       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:36:38.938436       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cefe11af8e63] <==
	W0923 10:36:30.422004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:36:30.422053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.448133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:36:30.448193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.597590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:36:30.597642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.627316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.627362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.638928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 10:36:30.638980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.639681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:36:30.639714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.656288       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:36:30.656331       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:36:30.673851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.673901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.732651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.732705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.750217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:36:30.750269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.788871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:36:30.788927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:36:30.793547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:36:30.793590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:36:32.724371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 11:04:16 UTC. --
	Sep 23 11:03:16 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:16.306441 1590014 kuberuntime_image.go:55] "Failed to pull image" err="no such image: \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\"" image="docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 11:03:16 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:16.306560 1590014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:volcano-scheduler,Image:docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882,Command:[],Args:[--logtostderr --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf --enable-healthz=true --enable-metrics=true --leader-elect=false -v=3 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEBUG_SOCKET_DIR,Value:/tmp/klog-socks,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scheduler-config,ReadOnly:false,MountPath:/volcano.scheduler,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:klog-sock,ReadOnly:false,MountPath:/tmp/klog-socks,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8qhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-scheduler-79dc4b78bb-zdd4g_volcano-system(710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a): ErrImagePull: no such image: \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\"" logger="UnhandledError"
	Sep 23 11:03:16 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:16.307855 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"no such image: \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 11:03:17 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:17.287604 1590014 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = no such image: \"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\"" image="docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 11:03:17 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:17.287682 1590014 kuberuntime_image.go:55] "Failed to pull image" err="no such image: \"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\"" image="docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 11:03:17 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:17.287826 1590014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:volcano-controllers,Image:docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de,Command:[],Args:[--logtostderr --enable-healthz=true --leader-elect=false -v=4 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjzd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-controllers-5ff7c5d4db-529t5_volcano-system(8629f94d-7406-49a9-9400-2127546ff73a): ErrImagePull: no such image: \"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\"" logger="UnhandledError"
	Sep 23 11:03:17 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:17.289036 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ErrImagePull: \"no such image: \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 11:03:21 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:21.272406 1590014 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = no such image: \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\"" image="docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 11:03:21 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:21.272468 1590014 kuberuntime_image.go:55] "Failed to pull image" err="no such image: \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\"" image="docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 11:03:21 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:21.272585 1590014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:main,Image:docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e,Command:[./gen-admission-secret.sh --service volcano-admission-service --namespace volcano-system --secret volcano-admission-secret],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvt4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-admission-init-gh7z4_volcano-system(0aacc128-e2fb-43a2-a10f-644572209858): ErrImagePull: no such image: \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\"" logger="UnhandledError"
	Sep 23 11:03:21 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:21.273810 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ErrImagePull: \"no such image: \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 11:03:29 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:29.164531 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 11:03:30 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:30.164173 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 11:03:37 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:37.164339 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 11:03:40 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:40.164457 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 11:03:43 ubuntu-20-agent-12 kubelet[1590014]: I0923 11:03:43.161814 1590014 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-p5xcl" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 11:03:45 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:45.164552 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 11:03:47 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:47.163152 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[admission-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="volcano-system/volcano-admission-7f54bd7598-rfghv" podUID="5bd93063-1d57-4569-b1ce-3b0c16811d04"
	Sep 23 11:03:50 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:50.164285 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 11:03:54 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:54.164191 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 11:03:56 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:03:56.164348 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	Sep 23 11:04:05 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:04:05.164275 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-gh7z4" podUID="0aacc128-e2fb-43a2-a10f-644572209858"
	Sep 23 11:04:07 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:04:07.164638 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-529t5" podUID="8629f94d-7406-49a9-9400-2127546ff73a"
	Sep 23 11:04:08 ubuntu-20-agent-12 kubelet[1590014]: I0923 11:04:08.162638 1590014 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-97lv7" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 11:04:11 ubuntu-20-agent-12 kubelet[1590014]: E0923 11:04:11.164547 1590014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-zdd4g" podUID="710bc9a3-ed4c-48d8-b3a8-f15c6bd3217a"
	
	
	==> storage-provisioner [a88800a1ce5b] <==
	I0923 10:36:38.418197       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:36:38.433696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:36:38.433749       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:36:38.445674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:36:38.446763       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0!
	I0923 10:36:38.449267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35a6bb7a-1e48-4bf9-816a-2d141c61bd81", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0 became leader
	I0923 10:36:38.547698       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_b26042fa-fd91-4f6e-b480-1072c860b1f0!
	

-- /stdout --
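Every pull failure in the kubelet log above references the same malformed image reference, with the registry prefix doubled (`docker.io/docker.io/volcanosh/...`). A minimal shell sketch of the normalization that would make such a reference pullable — a diagnostic aid only, not minikube's actual fix:

```shell
#!/bin/sh
# Image reference taken from the kubelet errors above (tag only, digest
# omitted for brevity): note the doubled "docker.io/" registry prefix.
img='docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0'

# Strip one copy of the duplicated registry prefix, if present.
case "$img" in
  docker.io/docker.io/*) img="${img#docker.io/}" ;;
esac

echo "$img"   # docker.io/volcanosh/vc-scheduler:v1.10.0
```

References without the duplication pass through the `case` untouched.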
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g: exit status 1 (73.402946ms)

** stderr ** 
	Error from server (NotFound): pods "volcano-admission-7f54bd7598-rfghv" not found
	Error from server (NotFound): pods "volcano-admission-init-gh7z4" not found
	Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-529t5" not found
	Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-zdd4g" not found

** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod volcano-admission-7f54bd7598-rfghv volcano-admission-init-gh7z4 volcano-controllers-5ff7c5d4db-529t5 volcano-scheduler-79dc4b78bb-zdd4g: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (481.97s)


Test pass (100/166)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.29
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 0.85
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
22 TestOffline 67.71
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 385.42
35 TestAddons/parallel/InspektorGadget 10.5
36 TestAddons/parallel/MetricsServer 5.41
40 TestAddons/parallel/CloudSpanner 5.28
42 TestAddons/parallel/NvidiaDevicePlugin 5.26
43 TestAddons/parallel/Yakd 11.47
44 TestAddons/StoppedEnableDisable 10.73
46 TestCertExpiration 228.36
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 28.97
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 33.28
61 TestFunctional/serial/KubeContext 0.05
62 TestFunctional/serial/KubectlGetPods 0.07
64 TestFunctional/serial/MinikubeKubectlCmd 0.12
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
66 TestFunctional/serial/ExtraConfig 38.54
67 TestFunctional/serial/ComponentHealth 0.07
68 TestFunctional/serial/LogsCmd 0.85
69 TestFunctional/serial/LogsFileCmd 0.9
70 TestFunctional/serial/InvalidService 3.94
72 TestFunctional/parallel/ConfigCmd 0.29
73 TestFunctional/parallel/DashboardCmd 7.91
74 TestFunctional/parallel/DryRun 0.17
75 TestFunctional/parallel/InternationalLanguage 0.09
76 TestFunctional/parallel/StatusCmd 0.46
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.23
80 TestFunctional/parallel/ProfileCmd/profile_list 0.22
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.22
83 TestFunctional/parallel/ServiceCmd/DeployApp 10.16
84 TestFunctional/parallel/ServiceCmd/List 0.34
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.16
87 TestFunctional/parallel/ServiceCmd/Format 0.16
88 TestFunctional/parallel/ServiceCmd/URL 0.16
89 TestFunctional/parallel/ServiceCmdConnect 8.33
90 TestFunctional/parallel/AddonsCmd 0.12
91 TestFunctional/parallel/PersistentVolumeClaim 24.4
104 TestFunctional/parallel/MySQL 20.72
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.43
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.64
113 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/Version/short 0.05
118 TestFunctional/parallel/Version/components 0.42
119 TestFunctional/parallel/License 0.16
120 TestFunctional/delete_echo-server_images 0.03
121 TestFunctional/delete_my-image_image 0.02
122 TestFunctional/delete_minikube_cached_images 0.01
127 TestImageBuild/serial/Setup 14.45
128 TestImageBuild/serial/NormalBuild 0.93
129 TestImageBuild/serial/BuildWithBuildArg 0.64
130 TestImageBuild/serial/BuildWithDockerIgnore 0.41
131 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.4
135 TestJSONOutput/start/Command 31.35
136 TestJSONOutput/start/Audit 0
138 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/pause/Command 0.53
142 TestJSONOutput/pause/Audit 0
144 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/unpause/Command 0.43
148 TestJSONOutput/unpause/Audit 0
150 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/stop/Command 10.48
154 TestJSONOutput/stop/Audit 0
156 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
158 TestErrorJSONOutput 0.21
163 TestMainNoArgs 0.05
164 TestMinikubeProfile 34.5
172 TestPause/serial/Start 29.02
173 TestPause/serial/SecondStartNoReconfiguration 32.58
174 TestPause/serial/Pause 0.5
175 TestPause/serial/VerifyStatus 0.13
176 TestPause/serial/Unpause 0.44
177 TestPause/serial/PauseAgain 0.53
178 TestPause/serial/DeletePaused 1.65
179 TestPause/serial/VerifyDeletedResources 0.07
193 TestRunningBinaryUpgrade 79.19
195 TestStoppedBinaryUpgrade/Setup 0.52
196 TestStoppedBinaryUpgrade/Upgrade 51.3
197 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
198 TestKubernetesUpgrade 304.74

TestDownloadOnly/v1.20.0/json-events (1.29s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.293052831s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.29s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (69.19647ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:35:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:35:07.699308 1584546 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:35:07.699447 1584546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:35:07.699457 1584546 out.go:358] Setting ErrFile to fd 2...
	I0923 10:35:07.699461 1584546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:35:07.699666 1584546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-1577701/.minikube/bin
	W0923 10:35:07.699788 1584546 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19688-1577701/.minikube/config/config.json: open /home/jenkins/minikube-integration/19688-1577701/.minikube/config/config.json: no such file or directory
	I0923 10:35:07.700374 1584546 out.go:352] Setting JSON to true
	I0923 10:35:07.701357 1584546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":145059,"bootTime":1726942649,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:35:07.701485 1584546 start.go:139] virtualization: kvm guest
	I0923 10:35:07.704041 1584546 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:35:07.704184 1584546 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:35:07.704222 1584546 notify.go:220] Checking for updates...
	I0923 10:35:07.705740 1584546 out.go:169] MINIKUBE_LOCATION=19688
	I0923 10:35:07.707301 1584546 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:35:07.708868 1584546 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:35:07.710343 1584546 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	I0923 10:35:07.711637 1584546 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.1/json-events (0.85s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.85s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (61.489416ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC | 23 Sep 24 10:35 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:35 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:35:09
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:35:09.340174 1584698 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:35:09.340475 1584698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:35:09.340484 1584698 out.go:358] Setting ErrFile to fd 2...
	I0923 10:35:09.340488 1584698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:35:09.340690 1584698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-1577701/.minikube/bin
	I0923 10:35:09.341360 1584698 out.go:352] Setting JSON to true
	I0923 10:35:09.342309 1584698 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":145060,"bootTime":1726942649,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:35:09.342428 1584698 start.go:139] virtualization: kvm guest
	I0923 10:35:09.344847 1584698 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:35:09.344977 1584698 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:35:09.345033 1584698 notify.go:220] Checking for updates...
	I0923 10:35:09.346617 1584698 out.go:169] MINIKUBE_LOCATION=19688
	I0923 10:35:09.348333 1584698 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:35:09.349847 1584698 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 10:35:09.351423 1584698 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	I0923 10:35:09.352893 1584698 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
I0923 10:35:10.740302 1584534 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:42273 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.56s)
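The `checksum=file:` parameter in the mirror URL above means the downloaded binary is validated against the digest published in the companion `.sha256` file. A minimal sketch of that comparison step, assuming `sha256sum` is available (the function name and file paths are hypothetical, not minikube's internals):

```shell
#!/bin/sh
# Compare a file's SHA-256 digest against the first field of a published
# .sha256 file (which may be "digest" alone or "digest  filename").
sha256_matches() {
  # $1 = binary path, $2 = path to its .sha256 file
  expected=$(awk '{print $1; exit}' "$2")
  actual=$(sha256sum "$1" | awk '{print $1}')
  [ "$expected" = "$actual" ]
}
```

Usage: `sha256_matches kubectl kubectl.sha256 && echo "checksum OK"`.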

TestOffline (67.71s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (1m5.965326203s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.742889817s)
--- PASS: TestOffline (67.71s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (51.462689ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (51.858418ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (385.42s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (6m25.419474385s)
--- PASS: TestAddons/Setup (385.42s)
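The `--- PASS:`/`--- FAIL:` lines throughout this report follow `go test`'s standard summary format, `--- RESULT: TestName (duration)`. A minimal sketch for tallying outcomes from such a log (the sample lines below are taken from this report; the regex is an assumption that the log uses only that standard layout):

```python
import re

# Matches go test summary lines such as "--- PASS: TestAddons/Setup (385.42s)".
SUMMARY = re.compile(r"^--- (PASS|FAIL|SKIP): (\S+) \(([\d.]+)s\)", re.MULTILINE)

def tally(log: str) -> dict:
    """Group (test name, duration-in-seconds) pairs by outcome."""
    results = {"PASS": [], "FAIL": [], "SKIP": []}
    for outcome, name, seconds in SUMMARY.findall(log):
        results[outcome].append((name, float(seconds)))
    return results

sample = """--- PASS: TestAddons/Setup (385.42s)
--- PASS: TestAddons/parallel/InspektorGadget (10.50s)
--- FAIL: TestAddons/serial/Volcano (361.64s)
"""
print(tally(sample)["FAIL"])  # → [('TestAddons/serial/Volcano', 361.64)]
```

The same pattern applies to the full report: the head's "5/166 failed" count is just `len(results["FAIL"])` over the complete log.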

TestAddons/parallel/InspektorGadget (10.5s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cc7cr" [25f9725e-0663-4ecf-bd22-662c6d69802a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004516778s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.496616973s)
--- PASS: TestAddons/parallel/InspektorGadget (10.50s)

TestAddons/parallel/MetricsServer (5.41s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.033831ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-l8xpt" [be83f637-49a0-4d61-b588-544359407926] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004294198s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.41s)

TestAddons/parallel/CloudSpanner (5.28s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-97lv7" [e9ffea0b-6716-4709-8e55-153a51669278] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004218551s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.28s)

TestAddons/parallel/NvidiaDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rmgc2" [7b196bf3-bd4c-4575-9cd3-d1c7adf5e6be] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004290065s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

TestAddons/parallel/Yakd (11.47s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-j4j2x" [e007067d-76a6-4e29-a10c-268b651e080d] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004352047s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.467259582s)
--- PASS: TestAddons/parallel/Yakd (11.47s)

TestAddons/StoppedEnableDisable (10.73s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.410369323s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.73s)

TestCertExpiration (228.36s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.935687521s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.633079852s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.793422348s)
--- PASS: TestCertExpiration (228.36s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19688-1577701/.minikube/files/etc/test/nested/copy/1584534/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (28.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (28.972360912s)
--- PASS: TestFunctional/serial/StartWithProxy (28.97s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.28s)

=== RUN   TestFunctional/serial/SoftStart
I0923 11:09:08.241010 1584534 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (33.283138951s)
functional_test.go:663: soft start took 33.283960542s for "minikube" cluster.
I0923 11:09:41.524548 1584534 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.28s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (38.54s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.538847442s)
functional_test.go:761: restart took 38.538961463s for "minikube" cluster.
I0923 11:10:20.411381 1584534 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.54s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.85s)

TestFunctional/serial/LogsFileCmd (0.9s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd2954028247/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.90s)

TestFunctional/serial/InvalidService (3.94s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (178.878856ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://10.128.15.239:32178 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

TestFunctional/parallel/ConfigCmd (0.29s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (47.362157ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (45.89164ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

TestFunctional/parallel/DashboardCmd (7.91s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/23 11:10:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1619931: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.91s)

TestFunctional/parallel/DryRun (0.17s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (84.409996ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 11:10:34.421384 1620300 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:10:34.421522 1620300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:10:34.421533 1620300 out.go:358] Setting ErrFile to fd 2...
	I0923 11:10:34.421540 1620300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:10:34.421748 1620300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-1577701/.minikube/bin
	I0923 11:10:34.422337 1620300 out.go:352] Setting JSON to false
	I0923 11:10:34.423361 1620300 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":147185,"bootTime":1726942649,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:10:34.423484 1620300 start.go:139] virtualization: kvm guest
	I0923 11:10:34.425948 1620300 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 11:10:34.427319 1620300 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 11:10:34.427335 1620300 notify.go:220] Checking for updates...
	I0923 11:10:34.427357 1620300 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 11:10:34.428687 1620300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:10:34.429914 1620300 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 11:10:34.431354 1620300 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	I0923 11:10:34.432589 1620300 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:10:34.433751 1620300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:10:34.435450 1620300 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:10:34.435757 1620300 exec_runner.go:51] Run: systemctl --version
	I0923 11:10:34.438536 1620300 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:10:34.451116 1620300 out.go:177] * Using the none driver based on existing profile
	I0923 11:10:34.452594 1620300 start.go:297] selected driver: none
	I0923 11:10:34.452617 1620300 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:10:34.452835 1620300 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:10:34.452863 1620300 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 11:10:34.453296 1620300 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0923 11:10:34.455442 1620300 out.go:201] 
	W0923 11:10:34.456826 1620300 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 11:10:34.458022 1620300 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.17s)

TestFunctional/parallel/InternationalLanguage (0.09s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (87.355272ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 11:10:34.590153 1620331 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:10:34.590305 1620331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:10:34.590316 1620331 out.go:358] Setting ErrFile to fd 2...
	I0923 11:10:34.590321 1620331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:10:34.590642 1620331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-1577701/.minikube/bin
	I0923 11:10:34.591252 1620331 out.go:352] Setting JSON to false
	I0923 11:10:34.592286 1620331 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":147186,"bootTime":1726942649,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:10:34.592406 1620331 start.go:139] virtualization: kvm guest
	I0923 11:10:34.594617 1620331 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0923 11:10:34.596038 1620331 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19688-1577701/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 11:10:34.596069 1620331 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 11:10:34.596118 1620331 notify.go:220] Checking for updates...
	I0923 11:10:34.598618 1620331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:10:34.600117 1620331 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	I0923 11:10:34.601568 1620331 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	I0923 11:10:34.602993 1620331 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:10:34.604445 1620331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:10:34.606012 1620331 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:10:34.606343 1620331 exec_runner.go:51] Run: systemctl --version
	I0923 11:10:34.609255 1620331 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:10:34.621176 1620331 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0923 11:10:34.622528 1620331 start.go:297] selected driver: none
	I0923 11:10:34.622548 1620331 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:10:34.622661 1620331 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:10:34.622693 1620331 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 11:10:34.622987 1620331 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0923 11:10:34.625573 1620331 out.go:201] 
	W0923 11:10:34.626939 1620331 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 11:10:34.628318 1620331 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.46s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.46s)
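The `-f` argument in the StatusCmd run above is a Go `text/template` rendered against minikube's status struct. A minimal sketch of that rendering; the `Status` type and `FormatStatus` helper here are illustrative stand-ins, not minikube's internal API (note the test's template literally contains the key `kublet:` — the template engine only resolves the `{{.Field}}` parts, everything else is copied through):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status stands in for minikube's internal status struct; the field
// names match the template keys used in the test above.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// FormatStatus renders a status with the same Go template syntax
// that `minikube status -f` accepts.
func FormatStatus(tmpl string, s Status) (string, error) {
	t, err := template.New("status").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, s); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := FormatStatus(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}",
		Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```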

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "171.653835ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.716328ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "166.472836ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.360489ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-k2bgn" [f26f8a5c-73a8-42e7-a043-039fe412929c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-k2bgn" [f26f8a5c-73a8-42e7-a043-039fe412929c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003410022s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "342.523847ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.128.15.239:30664
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.128.15.239:30664
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.33s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-shkkh" [15301141-6f8b-487a-b6de-72adb54442e4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-shkkh" [15301141-6f8b-487a-b6de-72adb54442e4] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004154408s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.128.15.239:32483
functional_test.go:1675: http://10.128.15.239:32483: success! body:

Hostname: hello-node-connect-67bdd5bbb4-shkkh

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.128.15.239:8080/

Request Headers:
	accept-encoding=gzip
	host=10.128.15.239:32483
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.33s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.4s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [73e7c90a-3e03-40a7-a57a-b60275001d23] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003878203s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f6083eff-1c37-4fde-a30f-bd0657c9d671] Pending
helpers_test.go:344: "sp-pod" [f6083eff-1c37-4fde-a30f-bd0657c9d671] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f6083eff-1c37-4fde-a30f-bd0657c9d671] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003905165s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3ee5a37c-fff9-47d2-9ed4-1777a594992d] Pending
helpers_test.go:344: "sp-pod" [3ee5a37c-fff9-47d2-9ed4-1777a594992d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3ee5a37c-fff9-47d2-9ed4-1777a594992d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004413082s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.40s)

                                                
                                    
TestFunctional/parallel/MySQL (20.72s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-jptkx" [052b063a-7259-4ef5-845d-12f008f96f9a] Pending
helpers_test.go:344: "mysql-6cdb49bbb-jptkx" [052b063a-7259-4ef5-845d-12f008f96f9a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-jptkx" [052b063a-7259-4ef5-845d-12f008f96f9a] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.004188074s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-jptkx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-jptkx -- mysql -ppassword -e "show databases;": exit status 1 (162.628611ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0923 11:11:36.338546 1584534 retry.go:31] will retry after 1.040134907s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-jptkx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-jptkx -- mysql -ppassword -e "show databases;": exit status 1 (115.558956ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 11:11:37.495459 1584534 retry.go:31] will retry after 792.982489ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-jptkx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-jptkx -- mysql -ppassword -e "show databases;": exit status 1 (112.333935ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 11:11:38.402017 1584534 retry.go:31] will retry after 2.20684301s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-jptkx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.72s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.43s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.42826395s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.43s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (13.64s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.642376175s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.64s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
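The `--template` argument above ranges over the node's label map and prints each key; Go's `text/template` visits map keys in sorted order (Go 1.12+), which is what makes the output stable enough to assert on. A standalone sketch running the same template against an in-memory map instead of a live node:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// RenderLabelKeys runs the same template the NodeLabels test hands to
// kubectl: range over a label map and print each key followed by a space.
func RenderLabelKeys(labels map[string]string) (string, error) {
	t, err := template.New("labels").Parse("{{range $k, $v := .}}{{$k}} {{end}}")
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, labels); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := RenderLabelKeys(map[string]string{
		"kubernetes.io/os":       "linux",
		"kubernetes.io/hostname": "minikube",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // kubernetes.io/hostname kubernetes.io/os 
}
```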

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.42s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestImageBuild/serial/Setup (14.45s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.446727941s)
--- PASS: TestImageBuild/serial/Setup (14.45s)

                                                
                                    
TestImageBuild/serial/NormalBuild (0.93s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
--- PASS: TestImageBuild/serial/NormalBuild (0.93s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.64s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.64s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.4s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.40s)

                                                
                                    
TestJSONOutput/start/Command (31.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (31.353827412s)
--- PASS: TestJSONOutput/start/Command (31.35s)
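With `--output=json`, minikube emits one CloudEvents-style JSON object per line (the TestErrorJSONOutput stdout further down shows several). A sketch decoding such a line; the `Event` struct keeps only the envelope fields visible in this log, not the full CloudEvents schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event models the CloudEvents v1.0 envelope minikube emits with
// --output=json, limited to the fields shown in this log.
type Event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// ParseEvent decodes a single JSON line from minikube's output stream.
func ParseEvent(line string) (Event, error) {
	var ev Event
	err := json.Unmarshal([]byte(line), &ev)
	return ev, err
}

func main() {
	line := `{"specversion":"1.0","id":"159e681b-85db-4746-bb8f-88bea435441b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19688"}}`
	ev, err := ParseEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["message"]) // io.k8s.sigs.minikube.info MINIKUBE_LOCATION=19688
}
```

The `json_output_test.go` assertions (DistinctCurrentSteps, IncreasingCurrentSteps) work by decoding these lines and checking the `currentstep` values in `data` for `io.k8s.sigs.minikube.step` events.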

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.53s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.48s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.476766247s)
--- PASS: TestJSONOutput/stop/Command (10.48s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.605322ms)

-- stdout --
	{"specversion":"1.0","id":"ad2f2c6d-d92b-448b-b52d-2fb2fd4c3794","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"159e681b-85db-4746-bb8f-88bea435441b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19688"}}
	{"specversion":"1.0","id":"c8b58f10-c11b-4d42-9546-94e03d573796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aeaae94f-8319-4e1f-b43a-94a3fc881864","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig"}}
	{"specversion":"1.0","id":"514df21c-f396-4d02-88a4-a2dafeb54d34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube"}}
	{"specversion":"1.0","id":"16710ed3-a274-48fb-8b0e-e071673a8371","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f5f1f314-09ba-44c8-9861-81d9a02de2c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c6c2b32e-550c-4d8b-94dd-856a1f61a367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.21s)
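Each stdout line above is a standalone CloudEvents-style JSON object. A small sketch of how such a line can be consumed, using the error event copied from the output of this run:

```python
import json

# One CloudEvents-style line as emitted by `minikube start --output=json`
# (copied from the run above; each stdout line is a complete JSON object).
line = ('{"specversion":"1.0","id":"c6c2b32e-550c-4d8b-94dd-856a1f61a367",'
        '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json",'
        '"data":{"advice":"","exitcode":"56","issues":"",'
        '"message":"The driver \'fail\' is not supported on linux/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
# Error events carry the failure name and exit code in the "data" payload.
if event["type"] == "io.k8s.sigs.minikube.error":
    print(event["data"]["name"], event["data"]["exitcode"])
# → DRV_UNSUPPORTED_OS 56
```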

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (34.5s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.461395439s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (18.019609226s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.377707042s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.50s)

                                                
                                    
TestPause/serial/Start (29.02s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (29.015185963s)
--- PASS: TestPause/serial/Start (29.02s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (32.58s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (32.578476052s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.58s)

                                                
                                    
TestPause/serial/Pause (0.5s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.50s)

                                                
                                    
TestPause/serial/VerifyStatus (0.13s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (130.155864ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
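The `--layout=cluster` status above is plain JSON with HTTP-style status codes (418 = Paused, 405 = Stopped, 200 = OK), and the command exits non-zero when the cluster is not running. A sketch of reading it, using a trimmed copy of the output above:

```python
import json

# Trimmed from the `minikube status --output=json --layout=cluster` output
# above; fields not needed here (Step, BinaryVersion, kubeconfig, ...) dropped.
status = json.loads('''{"Name":"minikube","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK",
  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
                "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}''')

paused = status["StatusName"] == "Paused"
components = [c["StatusName"] for c in status["Nodes"][0]["Components"].values()]
print(paused, components)
```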

                                                
                                    
TestPause/serial/Unpause (0.44s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.44s)

                                                
                                    
TestPause/serial/PauseAgain (0.53s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

                                                
                                    
TestPause/serial/DeletePaused (1.65s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.65486447s)
--- PASS: TestPause/serial/DeletePaused (1.65s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.07s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.07s)

                                                
                                    
TestRunningBinaryUpgrade (79.19s)
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4116911981 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4116911981 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (29.058749645s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (46.658647472s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.013640924s)
--- PASS: TestRunningBinaryUpgrade (79.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.52s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (51.3s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2766535054 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2766535054 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (15.581756209s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2766535054 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2766535054 -p minikube stop: (23.650508901s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.062380253s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.30s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
TestKubernetesUpgrade (304.74s)
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (27.290004644s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.289668141s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (76.404114ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m17.071755999s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (79.07203ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-1577701/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-1577701/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.601633104s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.270175272s)
--- PASS: TestKubernetesUpgrade (304.74s)
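The K8S_DOWNGRADE_UNSUPPORTED refusal above comes down to a version comparison: the requested v1.20.0 is older than the running v1.31.1. A minimal sketch of that check (not minikube's actual implementation, which uses a semver library):

```python
# Parse a "vMAJOR.MINOR.PATCH" string into a comparable tuple.
def parse(v: str) -> tuple:
    return tuple(int(p) for p in v.lstrip("v").split("."))

current, requested = parse("v1.31.1"), parse("v1.20.0")
# Tuples compare element-wise, so (1, 20, 0) < (1, 31, 1) flags a downgrade.
if requested < current:
    print("K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade", current, "->", requested)
```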


Test skip (61/166)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
97 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
98 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
100 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
102 TestFunctional/parallel/SSHCmd 0
103 TestFunctional/parallel/CpCmd 0
105 TestFunctional/parallel/FileSync 0
106 TestFunctional/parallel/CertSync 0
111 TestFunctional/parallel/DockerEnv 0
112 TestFunctional/parallel/PodmanEnv 0
114 TestFunctional/parallel/ImageCommands 0
115 TestFunctional/parallel/NonActiveRuntimeDisabled 0
123 TestGvisorAddon 0
124 TestMultiControlPlane 0
132 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
159 TestKicCustomNetwork 0
160 TestKicExistingNetwork 0
161 TestKicCustomSubnet 0
162 TestKicStaticIP 0
165 TestMountStart 0
166 TestMultiNode 0
167 TestNetworkPlugins 0
168 TestNoKubernetes 0
169 TestChangeNoneUser 0
180 TestPreload 0
181 TestScheduledStopWindows 0
182 TestScheduledStopUnix 0
183 TestSkaffold 0
186 TestStartStop/group/old-k8s-version 0.13
187 TestStartStop/group/newest-cni 0.13
188 TestStartStop/group/default-k8s-diff-port 0.14
189 TestStartStop/group/no-preload 0.13
190 TestStartStop/group/disable-driver-mounts 0.13
191 TestStartStop/group/embed-certs 0.14
192 TestInsufficientStorage 0
199 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (0s)
=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/LocalPath (0s)
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
TestCertOptions (0s)
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestForceSystemdFlag (0s)
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

                                                
                                    
TestForceSystemdEnv (0s)
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestErrorSpam (0s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd (0s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.14s)

TestStartStop/group/no-preload (0.13s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.14s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.14s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)