Test Report: KVM_Linux_containerd 20991

850300a2a1d8334a3437f5af90c59ac17fc542af:2025-06-30:40237
Failed tests (5/330)

Order  Failed test                           Duration (s)
29     TestAddons/serial/Volcano             374.36
37     TestAddons/parallel/Ingress           492.40
41     TestAddons/parallel/CSI               379.96
44     TestAddons/parallel/LocalPath         345.78
91     TestFunctional/parallel/DashboardCmd  302.35
TestAddons/serial/Volcano (374.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 22.821034ms
addons_test.go:868: volcano-scheduler stabilized in 23.019188ms
addons_test.go:884: volcano-controller stabilized in 23.093917ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-854568c9bb-jfhvt" [e37a78c0-cf90-49a3-bdb1-32ceb4f43f52] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "volcano-system" "app=volcano-scheduler" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:890: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:890: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-412730 -n addons-412730
addons_test.go:890: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-06-30 14:14:24.545684662 +0000 UTC m=+514.228811609
addons_test.go:890: (dbg) Run:  kubectl --context addons-412730 describe po volcano-scheduler-854568c9bb-jfhvt -n volcano-system
addons_test.go:890: (dbg) kubectl --context addons-412730 describe po volcano-scheduler-854568c9bb-jfhvt -n volcano-system:
Name:                 volcano-scheduler-854568c9bb-jfhvt
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 addons-412730/192.168.39.114
Start Time:           Mon, 30 Jun 2025 14:07:10 +0000
Labels:               app=volcano-scheduler
pod-template-hash=854568c9bb
Annotations:          <none>
Status:               Pending
SeccompProfile:       RuntimeDefault
IP:                   10.244.0.19
IPs:
IP:           10.244.0.19
Controlled By:  ReplicaSet/volcano-scheduler-854568c9bb
Containers:
volcano-scheduler:
Container ID:  
Image:         docker.io/volcanosh/vc-scheduler:v1.12.1@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2
Image ID:      
Port:          <none>
Host Port:     <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
--kube-api-qps=2000
--kube-api-burst=2000
--schedule-period=1s
--node-worker-threads=20
-v=3
2>&1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
DEBUG_SOCKET_DIR:  /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwgmd (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
scheduler-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      volcano-scheduler-configmap
Optional:  false
klog-sock:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-xwgmd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  7m14s                  default-scheduler  Successfully assigned volcano-system/volcano-scheduler-854568c9bb-jfhvt to addons-412730
Normal   Pulling    3m28s (x5 over 7m12s)  kubelet            Pulling image "docker.io/volcanosh/vc-scheduler:v1.12.1@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2"
Warning  Failed     3m28s (x5 over 6m30s)  kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.12.1@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2": failed to pull and unpack image "docker.io/volcanosh/vc-scheduler@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m28s (x5 over 6m30s)  kubelet            Error: ErrImagePull
Warning  Failed     76s (x20 over 6m30s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    65s (x21 over 6m30s)   kubelet            Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.12.1@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2"
addons_test.go:890: (dbg) Run:  kubectl --context addons-412730 logs volcano-scheduler-854568c9bb-jfhvt -n volcano-system
addons_test.go:890: (dbg) Non-zero exit: kubectl --context addons-412730 logs volcano-scheduler-854568c9bb-jfhvt -n volcano-system: exit status 1 (69.265274ms)

** stderr ** 
	Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-854568c9bb-jfhvt" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:890: kubectl --context addons-412730 logs volcano-scheduler-854568c9bb-jfhvt -n volcano-system: exit status 1
addons_test.go:891: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
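The kubelet events above show the root cause: every pull of the volcano-scheduler image from registry-1.docker.io was rejected with 429 Too Many Requests (Docker Hub's unauthenticated pull rate limit), so the pod never left ImagePullBackOff and the 6m0s wait timed out. Two common mitigations are sketched below as a hedged aside; the profile name `addons-412730` is taken from this run, but whether either approach fits this CI environment is an assumption, not part of the report.

```shell
# Option A: pull the image on the host (authenticated via `docker login`,
# which raises the rate limit) and side-load it into the minikube node,
# so kubelet never contacts registry-1.docker.io for it:
docker pull docker.io/volcanosh/vc-scheduler:v1.12.1
minikube image load docker.io/volcanosh/vc-scheduler:v1.12.1 -p addons-412730

# Option B: start the cluster against a Docker Hub mirror instead
# (mirror.gcr.io is one commonly used public mirror):
minikube start -p addons-412730 --registry-mirror=https://mirror.gcr.io
```

Either way the repeated anonymous pulls that triggered the 429 are avoided; the rest of the failed addon tests in this report would need their own events checked to confirm the same cause.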
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-412730 -n addons-412730
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 logs -n 25: (1.541737743s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:05 UTC |                     |
	|         | -p download-only-083943              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-083943              | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| start   | -o=json --download-only              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | -p download-only-480082              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-480082              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-083943              | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-480082              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| start   | --download-only -p                   | binary-mirror-278166 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | binary-mirror-278166                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42597               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-278166              | binary-mirror-278166 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| addons  | disable dashboard -p                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | addons-412730                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | addons-412730                        |                      |         |         |                     |                     |
	| start   | -p addons-412730 --wait=true         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:08 UTC |
	|         | --memory=4096 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=registry-creds              |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:06:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:06:06.240063 1460091 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:06:06.240209 1460091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:06.240221 1460091 out.go:358] Setting ErrFile to fd 2...
	I0630 14:06:06.240225 1460091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:06.240435 1460091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:06:06.241146 1460091 out.go:352] Setting JSON to false
	I0630 14:06:06.242162 1460091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49689,"bootTime":1751242677,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:06:06.242287 1460091 start.go:140] virtualization: kvm guest
	I0630 14:06:06.244153 1460091 out.go:177] * [addons-412730] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:06:06.245583 1460091 notify.go:220] Checking for updates...
	I0630 14:06:06.245617 1460091 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:06:06.246864 1460091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:06:06.248249 1460091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:06:06.249601 1460091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:06.251003 1460091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:06:06.252187 1460091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:06:06.253562 1460091 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:06:06.289858 1460091 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 14:06:06.291153 1460091 start.go:304] selected driver: kvm2
	I0630 14:06:06.291176 1460091 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:06:06.291195 1460091 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:06:06.292048 1460091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:06:06.292142 1460091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1452140/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:06:06.309060 1460091 install.go:137] /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:06:06.309119 1460091 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:06:06.309429 1460091 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:06:06.309479 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:06.309532 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:06.309546 1460091 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:06:06.309617 1460091 start.go:347] cluster config:
	{Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: Net
workPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPU
s: AutoPauseInterval:1m0s}
	I0630 14:06:06.309739 1460091 iso.go:125] acquiring lock: {Name:mk3f178100d94eda06013511859d36adab64257f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:06:06.311683 1460091 out.go:177] * Starting "addons-412730" primary control-plane node in "addons-412730" cluster
	I0630 14:06:06.313225 1460091 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime containerd
	I0630 14:06:06.313276 1460091 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4
	I0630 14:06:06.313292 1460091 cache.go:56] Caching tarball of preloaded images
	I0630 14:06:06.313420 1460091 preload.go:172] Found /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0630 14:06:06.313435 1460091 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on containerd
	I0630 14:06:06.313766 1460091 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json ...
	I0630 14:06:06.313798 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json: {Name:mk9a7a41f109a1f3b7b9e5a38a0e2a1bce3a8d97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:06.313975 1460091 start.go:360] acquireMachinesLock for addons-412730: {Name:mkb4b5035f5dd19ed6df4556a284e7c795570454 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 14:06:06.314058 1460091 start.go:364] duration metric: took 65.368µs to acquireMachinesLock for "addons-412730"
	I0630 14:06:06.314084 1460091 start.go:93] Provisioning new machine with config: &{Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 Clu
sterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0630 14:06:06.314172 1460091 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 14:06:06.316769 1460091 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0630 14:06:06.316975 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:06.317044 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:06.332767 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0630 14:06:06.333480 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:06.334061 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:06.334083 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:06.334504 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:06.334801 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:06.335019 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:06.335217 1460091 start.go:159] libmachine.API.Create for "addons-412730" (driver="kvm2")
	I0630 14:06:06.335248 1460091 client.go:168] LocalClient.Create starting
	I0630 14:06:06.335289 1460091 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem
	I0630 14:06:06.483712 1460091 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem
	I0630 14:06:06.592251 1460091 main.go:141] libmachine: Running pre-create checks...
	I0630 14:06:06.592287 1460091 main.go:141] libmachine: (addons-412730) Calling .PreCreateCheck
	I0630 14:06:06.592947 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:06.593668 1460091 main.go:141] libmachine: Creating machine...
	I0630 14:06:06.593697 1460091 main.go:141] libmachine: (addons-412730) Calling .Create
	I0630 14:06:06.594139 1460091 main.go:141] libmachine: (addons-412730) creating KVM machine...
	I0630 14:06:06.594168 1460091 main.go:141] libmachine: (addons-412730) creating network...
	I0630 14:06:06.595936 1460091 main.go:141] libmachine: (addons-412730) DBG | found existing default KVM network
	I0630 14:06:06.596779 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.596550 1460113 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020ef20}
	I0630 14:06:06.596808 1460091 main.go:141] libmachine: (addons-412730) DBG | created network xml: 
	I0630 14:06:06.596818 1460091 main.go:141] libmachine: (addons-412730) DBG | <network>
	I0630 14:06:06.596822 1460091 main.go:141] libmachine: (addons-412730) DBG |   <name>mk-addons-412730</name>
	I0630 14:06:06.596828 1460091 main.go:141] libmachine: (addons-412730) DBG |   <dns enable='no'/>
	I0630 14:06:06.596832 1460091 main.go:141] libmachine: (addons-412730) DBG |   
	I0630 14:06:06.596839 1460091 main.go:141] libmachine: (addons-412730) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 14:06:06.596851 1460091 main.go:141] libmachine: (addons-412730) DBG |     <dhcp>
	I0630 14:06:06.596865 1460091 main.go:141] libmachine: (addons-412730) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 14:06:06.596872 1460091 main.go:141] libmachine: (addons-412730) DBG |     </dhcp>
	I0630 14:06:06.596877 1460091 main.go:141] libmachine: (addons-412730) DBG |   </ip>
	I0630 14:06:06.596883 1460091 main.go:141] libmachine: (addons-412730) DBG |   
	I0630 14:06:06.596888 1460091 main.go:141] libmachine: (addons-412730) DBG | </network>
	I0630 14:06:06.596897 1460091 main.go:141] libmachine: (addons-412730) DBG | 
	I0630 14:06:06.602938 1460091 main.go:141] libmachine: (addons-412730) DBG | trying to create private KVM network mk-addons-412730 192.168.39.0/24...
	I0630 14:06:06.682845 1460091 main.go:141] libmachine: (addons-412730) DBG | private KVM network mk-addons-412730 192.168.39.0/24 created
	I0630 14:06:06.682892 1460091 main.go:141] libmachine: (addons-412730) setting up store path in /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 ...
	I0630 14:06:06.682905 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.682807 1460113 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:06.682951 1460091 main.go:141] libmachine: (addons-412730) building disk image from file:///home/jenkins/minikube-integration/20991-1452140/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:06:06.682983 1460091 main.go:141] libmachine: (addons-412730) Downloading /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1452140/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 14:06:06.983317 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.983139 1460113 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa...
	I0630 14:06:07.030013 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:07.029839 1460113 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/addons-412730.rawdisk...
	I0630 14:06:07.030043 1460091 main.go:141] libmachine: (addons-412730) DBG | Writing magic tar header
	I0630 14:06:07.030053 1460091 main.go:141] libmachine: (addons-412730) DBG | Writing SSH key tar header
	I0630 14:06:07.030061 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:07.029966 1460113 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 ...
	I0630 14:06:07.030071 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730
	I0630 14:06:07.030150 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 (perms=drwx------)
	I0630 14:06:07.030175 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines (perms=drwxr-xr-x)
	I0630 14:06:07.030186 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines
	I0630 14:06:07.030199 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube (perms=drwxr-xr-x)
	I0630 14:06:07.030230 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140 (perms=drwxrwxr-x)
	I0630 14:06:07.030243 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 14:06:07.030249 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:07.030257 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140
	I0630 14:06:07.030272 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 14:06:07.030284 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 14:06:07.030316 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins
	I0630 14:06:07.030332 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home
	I0630 14:06:07.030374 1460091 main.go:141] libmachine: (addons-412730) creating domain...
	I0630 14:06:07.030392 1460091 main.go:141] libmachine: (addons-412730) DBG | skipping /home - not owner
	I0630 14:06:07.031398 1460091 main.go:141] libmachine: (addons-412730) define libvirt domain using xml: 
	I0630 14:06:07.031420 1460091 main.go:141] libmachine: (addons-412730) <domain type='kvm'>
	I0630 14:06:07.031429 1460091 main.go:141] libmachine: (addons-412730)   <name>addons-412730</name>
	I0630 14:06:07.031435 1460091 main.go:141] libmachine: (addons-412730)   <memory unit='MiB'>4096</memory>
	I0630 14:06:07.031443 1460091 main.go:141] libmachine: (addons-412730)   <vcpu>2</vcpu>
	I0630 14:06:07.031449 1460091 main.go:141] libmachine: (addons-412730)   <features>
	I0630 14:06:07.031457 1460091 main.go:141] libmachine: (addons-412730)     <acpi/>
	I0630 14:06:07.031472 1460091 main.go:141] libmachine: (addons-412730)     <apic/>
	I0630 14:06:07.031484 1460091 main.go:141] libmachine: (addons-412730)     <pae/>
	I0630 14:06:07.031495 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.031506 1460091 main.go:141] libmachine: (addons-412730)   </features>
	I0630 14:06:07.031515 1460091 main.go:141] libmachine: (addons-412730)   <cpu mode='host-passthrough'>
	I0630 14:06:07.031524 1460091 main.go:141] libmachine: (addons-412730)   
	I0630 14:06:07.031534 1460091 main.go:141] libmachine: (addons-412730)   </cpu>
	I0630 14:06:07.031544 1460091 main.go:141] libmachine: (addons-412730)   <os>
	I0630 14:06:07.031554 1460091 main.go:141] libmachine: (addons-412730)     <type>hvm</type>
	I0630 14:06:07.031563 1460091 main.go:141] libmachine: (addons-412730)     <boot dev='cdrom'/>
	I0630 14:06:07.031572 1460091 main.go:141] libmachine: (addons-412730)     <boot dev='hd'/>
	I0630 14:06:07.031581 1460091 main.go:141] libmachine: (addons-412730)     <bootmenu enable='no'/>
	I0630 14:06:07.031597 1460091 main.go:141] libmachine: (addons-412730)   </os>
	I0630 14:06:07.031609 1460091 main.go:141] libmachine: (addons-412730)   <devices>
	I0630 14:06:07.031619 1460091 main.go:141] libmachine: (addons-412730)     <disk type='file' device='cdrom'>
	I0630 14:06:07.031636 1460091 main.go:141] libmachine: (addons-412730)       <source file='/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/boot2docker.iso'/>
	I0630 14:06:07.031647 1460091 main.go:141] libmachine: (addons-412730)       <target dev='hdc' bus='scsi'/>
	I0630 14:06:07.031659 1460091 main.go:141] libmachine: (addons-412730)       <readonly/>
	I0630 14:06:07.031667 1460091 main.go:141] libmachine: (addons-412730)     </disk>
	I0630 14:06:07.031679 1460091 main.go:141] libmachine: (addons-412730)     <disk type='file' device='disk'>
	I0630 14:06:07.031689 1460091 main.go:141] libmachine: (addons-412730)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 14:06:07.031737 1460091 main.go:141] libmachine: (addons-412730)       <source file='/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/addons-412730.rawdisk'/>
	I0630 14:06:07.031764 1460091 main.go:141] libmachine: (addons-412730)       <target dev='hda' bus='virtio'/>
	I0630 14:06:07.031774 1460091 main.go:141] libmachine: (addons-412730)     </disk>
	I0630 14:06:07.031792 1460091 main.go:141] libmachine: (addons-412730)     <interface type='network'>
	I0630 14:06:07.031805 1460091 main.go:141] libmachine: (addons-412730)       <source network='mk-addons-412730'/>
	I0630 14:06:07.031820 1460091 main.go:141] libmachine: (addons-412730)       <model type='virtio'/>
	I0630 14:06:07.031854 1460091 main.go:141] libmachine: (addons-412730)     </interface>
	I0630 14:06:07.031878 1460091 main.go:141] libmachine: (addons-412730)     <interface type='network'>
	I0630 14:06:07.031890 1460091 main.go:141] libmachine: (addons-412730)       <source network='default'/>
	I0630 14:06:07.031901 1460091 main.go:141] libmachine: (addons-412730)       <model type='virtio'/>
	I0630 14:06:07.031909 1460091 main.go:141] libmachine: (addons-412730)     </interface>
	I0630 14:06:07.031919 1460091 main.go:141] libmachine: (addons-412730)     <serial type='pty'>
	I0630 14:06:07.031927 1460091 main.go:141] libmachine: (addons-412730)       <target port='0'/>
	I0630 14:06:07.031942 1460091 main.go:141] libmachine: (addons-412730)     </serial>
	I0630 14:06:07.031951 1460091 main.go:141] libmachine: (addons-412730)     <console type='pty'>
	I0630 14:06:07.031964 1460091 main.go:141] libmachine: (addons-412730)       <target type='serial' port='0'/>
	I0630 14:06:07.031975 1460091 main.go:141] libmachine: (addons-412730)     </console>
	I0630 14:06:07.031982 1460091 main.go:141] libmachine: (addons-412730)     <rng model='virtio'>
	I0630 14:06:07.031995 1460091 main.go:141] libmachine: (addons-412730)       <backend model='random'>/dev/random</backend>
	I0630 14:06:07.032001 1460091 main.go:141] libmachine: (addons-412730)     </rng>
	I0630 14:06:07.032011 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.032016 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.032026 1460091 main.go:141] libmachine: (addons-412730)   </devices>
	I0630 14:06:07.032034 1460091 main.go:141] libmachine: (addons-412730) </domain>
	I0630 14:06:07.032066 1460091 main.go:141] libmachine: (addons-412730) 
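The lines above log the libvirt `<domain>` XML one fragment per line. Reassembled, it is ordinary XML and can be checked with Go's `encoding/xml`; the sketch below parses the fields minikube sets (name, memory, vcpu). The `Domain` struct is an illustrative assumption for this report, not minikube's own types.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"strconv"
)

// Domain is a minimal, illustrative mapping of the libvirt <domain>
// XML fragments logged above; not minikube's actual data structures.
type Domain struct {
	Type   string `xml:"type,attr"`
	Name   string `xml:"name"`
	Memory struct {
		Unit  string `xml:"unit,attr"`
		Value string `xml:",chardata"` // chardata must be a string; convert separately
	} `xml:"memory"`
	VCPU int `xml:"vcpu"`
}

func parseDomain(raw string) (Domain, error) {
	var d Domain
	err := xml.Unmarshal([]byte(raw), &d)
	return d, err
}

func main() {
	// The same values the log shows minikube defining for addons-412730.
	const domXML = `<domain type='kvm'>
  <name>addons-412730</name>
  <memory unit='MiB'>4096</memory>
  <vcpu>2</vcpu>
</domain>`
	d, err := parseDomain(domXML)
	if err != nil {
		panic(err)
	}
	mib, _ := strconv.Atoi(d.Memory.Value)
	fmt.Printf("%s: %d %s, %d vCPUs\n", d.Name, mib, d.Memory.Unit, d.VCPU)
}
```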
	I0630 14:06:07.037044 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:0d:7b:07 in network default
	I0630 14:06:07.037851 1460091 main.go:141] libmachine: (addons-412730) starting domain...
	I0630 14:06:07.037899 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:07.037908 1460091 main.go:141] libmachine: (addons-412730) ensuring networks are active...
	I0630 14:06:07.038725 1460091 main.go:141] libmachine: (addons-412730) Ensuring network default is active
	I0630 14:06:07.039106 1460091 main.go:141] libmachine: (addons-412730) Ensuring network mk-addons-412730 is active
	I0630 14:06:07.039715 1460091 main.go:141] libmachine: (addons-412730) getting domain XML...
	I0630 14:06:07.040672 1460091 main.go:141] libmachine: (addons-412730) creating domain...
	I0630 14:06:08.319736 1460091 main.go:141] libmachine: (addons-412730) waiting for IP...
	I0630 14:06:08.320757 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.321298 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.321358 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.321305 1460113 retry.go:31] will retry after 217.608702ms: waiting for domain to come up
	I0630 14:06:08.541088 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.541707 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.541732 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.541668 1460113 retry.go:31] will retry after 322.22603ms: waiting for domain to come up
	I0630 14:06:08.865505 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.865965 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.865994 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.865925 1460113 retry.go:31] will retry after 339.049792ms: waiting for domain to come up
	I0630 14:06:09.206655 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:09.207155 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:09.207213 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:09.207148 1460113 retry.go:31] will retry after 478.054487ms: waiting for domain to come up
	I0630 14:06:09.686885 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:09.687397 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:09.687426 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:09.687347 1460113 retry.go:31] will retry after 663.338232ms: waiting for domain to come up
	I0630 14:06:10.352433 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:10.352917 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:10.352942 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:10.352876 1460113 retry.go:31] will retry after 824.880201ms: waiting for domain to come up
	I0630 14:06:11.179557 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:11.180050 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:11.180081 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:11.180000 1460113 retry.go:31] will retry after 1.072535099s: waiting for domain to come up
	I0630 14:06:12.253993 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:12.254526 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:12.254560 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:12.254433 1460113 retry.go:31] will retry after 1.120902402s: waiting for domain to come up
	I0630 14:06:13.376695 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:13.377283 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:13.377315 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:13.377244 1460113 retry.go:31] will retry after 1.419759095s: waiting for domain to come up
	I0630 14:06:14.799069 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:14.799546 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:14.799574 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:14.799514 1460113 retry.go:31] will retry after 1.843918596s: waiting for domain to come up
	I0630 14:06:16.645512 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:16.646025 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:16.646082 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:16.646003 1460113 retry.go:31] will retry after 2.785739179s: waiting for domain to come up
	I0630 14:06:19.434426 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:19.435055 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:19.435086 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:19.434987 1460113 retry.go:31] will retry after 2.736128675s: waiting for domain to come up
	I0630 14:06:22.172470 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:22.173071 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:22.173092 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:22.173042 1460113 retry.go:31] will retry after 3.042875133s: waiting for domain to come up
	I0630 14:06:25.219310 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:25.219910 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:25.219934 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:25.219869 1460113 retry.go:31] will retry after 4.255226103s: waiting for domain to come up
	I0630 14:06:29.478898 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.479625 1460091 main.go:141] libmachine: (addons-412730) found domain IP: 192.168.39.114
	I0630 14:06:29.479653 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has current primary IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.479661 1460091 main.go:141] libmachine: (addons-412730) reserving static IP address...
	I0630 14:06:29.480160 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find host DHCP lease matching {name: "addons-412730", mac: "52:54:00:ac:59:ff", ip: "192.168.39.114"} in network mk-addons-412730
	I0630 14:06:29.563376 1460091 main.go:141] libmachine: (addons-412730) reserved static IP address 192.168.39.114 for domain addons-412730
	I0630 14:06:29.563409 1460091 main.go:141] libmachine: (addons-412730) waiting for SSH...
	I0630 14:06:29.563418 1460091 main.go:141] libmachine: (addons-412730) DBG | Getting to WaitForSSH function...
	I0630 14:06:29.566605 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.567079 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.567114 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.567268 1460091 main.go:141] libmachine: (addons-412730) DBG | Using SSH client type: external
	I0630 14:06:29.567309 1460091 main.go:141] libmachine: (addons-412730) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa (-rw-------)
	I0630 14:06:29.567351 1460091 main.go:141] libmachine: (addons-412730) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 14:06:29.567371 1460091 main.go:141] libmachine: (addons-412730) DBG | About to run SSH command:
	I0630 14:06:29.567386 1460091 main.go:141] libmachine: (addons-412730) DBG | exit 0
	I0630 14:06:29.697378 1460091 main.go:141] libmachine: (addons-412730) DBG | SSH cmd err, output: <nil>: 
	I0630 14:06:29.697644 1460091 main.go:141] libmachine: (addons-412730) KVM machine creation complete
	I0630 14:06:29.698028 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:29.698656 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:29.698905 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:29.699080 1460091 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 14:06:29.699098 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:29.700512 1460091 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 14:06:29.700530 1460091 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 14:06:29.700538 1460091 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 14:06:29.700545 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.702878 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.703363 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.703393 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.703678 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.703917 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.704093 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.704253 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.704472 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.704757 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.704772 1460091 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 14:06:29.825352 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:06:29.825394 1460091 main.go:141] libmachine: Detecting the provisioner...
	I0630 14:06:29.825405 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.828698 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.829249 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.829291 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.829467 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.829702 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.829910 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.830086 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.830284 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.830503 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.830515 1460091 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 14:06:29.950727 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 14:06:29.950815 1460091 main.go:141] libmachine: found compatible host: buildroot
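Provisioner detection above runs `cat /etc/os-release` and matches on its `KEY=VALUE` output. A small sketch of that parsing step (not minikube's own detector) in Go:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release KEY=VALUE lines, like the
// Buildroot output dumped in the log above, into a map, stripping
// surrounding double quotes from values.
func parseOSRelease(data string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	// The exact output shown in the log.
	release := parseOSRelease(`NAME=Buildroot
VERSION=2025.02-dirty
ID=buildroot
VERSION_ID=2025.02
PRETTY_NAME="Buildroot 2025.02"`)
	fmt.Println(release["ID"], "/", release["PRETTY_NAME"])
}
```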
	I0630 14:06:29.950829 1460091 main.go:141] libmachine: Provisioning with buildroot...
	I0630 14:06:29.950838 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:29.951114 1460091 buildroot.go:166] provisioning hostname "addons-412730"
	I0630 14:06:29.951153 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:29.951406 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.954775 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.955251 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.955283 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.955448 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.955676 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.955864 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.956131 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.956359 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.956598 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.956616 1460091 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-412730 && echo "addons-412730" | sudo tee /etc/hostname
	I0630 14:06:30.091933 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-412730
	
	I0630 14:06:30.091974 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.095576 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.095967 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.095993 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.096193 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.096420 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.096640 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.096775 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.096955 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:30.097249 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:30.097278 1460091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-412730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-412730/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-412730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 14:06:30.228409 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:06:30.228455 1460091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1452140/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1452140/.minikube}
	I0630 14:06:30.228507 1460091 buildroot.go:174] setting up certificates
	I0630 14:06:30.228539 1460091 provision.go:84] configureAuth start
	I0630 14:06:30.228557 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:30.228999 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:30.232598 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.233018 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.233052 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.233306 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.235934 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.236310 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.236353 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.236511 1460091 provision.go:143] copyHostCerts
	I0630 14:06:30.236588 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.pem (1078 bytes)
	I0630 14:06:30.236717 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/cert.pem (1123 bytes)
	I0630 14:06:30.236771 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/key.pem (1675 bytes)
	I0630 14:06:30.236826 1460091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem org=jenkins.addons-412730 san=[127.0.0.1 192.168.39.114 addons-412730 localhost minikube]
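The server cert generated above carries SANs for every name the machine answers to (127.0.0.1, 192.168.39.114, addons-412730, localhost, minikube). A sketch of issuing a certificate with those SANs via `crypto/x509` follows; it is self-signed for brevity, whereas the log shows minikube signing with its CA key, and the function name is hypothetical.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// makeServerCert issues a self-signed server certificate carrying the
// given DNS and IP SANs; illustrative only, minikube signs with its CA.
func makeServerCert(dns []string, ips []net.IP) (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-412730"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     dns,
		IPAddresses:  ips,
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	cert, err := makeServerCert(
		[]string{"addons-412730", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.114")},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(cert.DNSNames)
}
```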
	I0630 14:06:30.629859 1460091 provision.go:177] copyRemoteCerts
	I0630 14:06:30.629936 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 14:06:30.629965 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.633589 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.634037 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.634067 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.634292 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.634709 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.634951 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.635149 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:30.732351 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 14:06:30.765263 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 14:06:30.797980 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 14:06:30.829589 1460091 provision.go:87] duration metric: took 601.031936ms to configureAuth
	I0630 14:06:30.829626 1460091 buildroot.go:189] setting minikube options for container-runtime
	I0630 14:06:30.829835 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:30.829875 1460091 main.go:141] libmachine: Checking connection to Docker...
	I0630 14:06:30.829891 1460091 main.go:141] libmachine: (addons-412730) Calling .GetURL
	I0630 14:06:30.831493 1460091 main.go:141] libmachine: (addons-412730) DBG | using libvirt version 6000000
	I0630 14:06:30.834168 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.834575 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.834608 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.834836 1460091 main.go:141] libmachine: Docker is up and running!
	I0630 14:06:30.834858 1460091 main.go:141] libmachine: Reticulating splines...
	I0630 14:06:30.834867 1460091 client.go:171] duration metric: took 24.499610068s to LocalClient.Create
	I0630 14:06:30.834910 1460091 start.go:167] duration metric: took 24.499694666s to libmachine.API.Create "addons-412730"
	I0630 14:06:30.834925 1460091 start.go:293] postStartSetup for "addons-412730" (driver="kvm2")
	I0630 14:06:30.834938 1460091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 14:06:30.834971 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:30.835263 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 14:06:30.835291 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.837701 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.838027 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.838070 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.838230 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.838425 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.838615 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.838765 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:30.930536 1460091 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 14:06:30.935492 1460091 info.go:137] Remote host: Buildroot 2025.02
	I0630 14:06:30.935534 1460091 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1452140/.minikube/addons for local assets ...
	I0630 14:06:30.935631 1460091 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1452140/.minikube/files for local assets ...
	I0630 14:06:30.935674 1460091 start.go:296] duration metric: took 100.742963ms for postStartSetup
	I0630 14:06:30.935713 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:30.936417 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:30.939655 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.940194 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.940223 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.940486 1460091 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json ...
	I0630 14:06:30.940676 1460091 start.go:128] duration metric: took 24.626491157s to createHost
	I0630 14:06:30.940701 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.943451 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.943947 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.943979 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.944167 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.944383 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.944557 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.944780 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.944979 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:30.945339 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:30.945363 1460091 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 14:06:31.062586 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751292391.035640439
	
	I0630 14:06:31.062617 1460091 fix.go:216] guest clock: 1751292391.035640439
	I0630 14:06:31.062625 1460091 fix.go:229] Guest: 2025-06-30 14:06:31.035640439 +0000 UTC Remote: 2025-06-30 14:06:30.940689328 +0000 UTC m=+24.741258527 (delta=94.951111ms)
	I0630 14:06:31.062664 1460091 fix.go:200] guest clock delta is within tolerance: 94.951111ms
	I0630 14:06:31.062669 1460091 start.go:83] releasing machines lock for "addons-412730", held for 24.748599614s
	I0630 14:06:31.062697 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.063068 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:31.066256 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.066740 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.066774 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.067022 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.067620 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.067907 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.068104 1460091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 14:06:31.068165 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:31.068221 1460091 ssh_runner.go:195] Run: cat /version.json
	I0630 14:06:31.068250 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:31.071486 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.071690 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072008 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.072043 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072103 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.072130 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072204 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:31.072375 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:31.072476 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:31.072559 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:31.072632 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:31.072686 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:31.072859 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:31.072867 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:31.159582 1460091 ssh_runner.go:195] Run: systemctl --version
	I0630 14:06:31.186817 1460091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 14:06:31.193553 1460091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 14:06:31.193649 1460091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 14:06:31.215105 1460091 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 14:06:31.215137 1460091 start.go:495] detecting cgroup driver to use...
	I0630 14:06:31.215213 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0630 14:06:31.257543 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0630 14:06:31.273400 1460091 docker.go:230] disabling cri-docker service (if available) ...
	I0630 14:06:31.273466 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 14:06:31.289789 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 14:06:31.306138 1460091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 14:06:31.453571 1460091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 14:06:31.593173 1460091 docker.go:246] disabling docker service ...
	I0630 14:06:31.593260 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 14:06:31.610223 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 14:06:31.625803 1460091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 14:06:31.823510 1460091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 14:06:31.974811 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 14:06:31.996098 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 14:06:32.020154 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0630 14:06:32.033292 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0630 14:06:32.046251 1460091 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0630 14:06:32.046373 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0630 14:06:32.059569 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0630 14:06:32.072460 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0630 14:06:32.085242 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0630 14:06:32.098259 1460091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 14:06:32.111503 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0630 14:06:32.124063 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0630 14:06:32.136348 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0630 14:06:32.148960 1460091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 14:06:32.159881 1460091 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 14:06:32.159967 1460091 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 14:06:32.176065 1460091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 14:06:32.188348 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:32.325076 1460091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0630 14:06:32.359838 1460091 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0630 14:06:32.359979 1460091 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0630 14:06:32.366616 1460091 retry.go:31] will retry after 624.469247ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0630 14:06:32.991518 1460091 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0630 14:06:32.997598 1460091 start.go:563] Will wait 60s for crictl version
	I0630 14:06:32.997677 1460091 ssh_runner.go:195] Run: which crictl
	I0630 14:06:33.002325 1460091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 14:06:33.045054 1460091 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0630 14:06:33.045186 1460091 ssh_runner.go:195] Run: containerd --version
	I0630 14:06:33.074290 1460091 ssh_runner.go:195] Run: containerd --version
	I0630 14:06:33.134404 1460091 out.go:177] * Preparing Kubernetes v1.33.2 on containerd 1.7.23 ...
	I0630 14:06:33.198052 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:33.201668 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:33.202138 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:33.202162 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:33.202486 1460091 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 14:06:33.207929 1460091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:06:33.224479 1460091 kubeadm.go:875] updating cluster {Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412
730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 14:06:33.224651 1460091 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime containerd
	I0630 14:06:33.224723 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:33.262407 1460091 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 14:06:33.262480 1460091 ssh_runner.go:195] Run: which lz4
	I0630 14:06:33.267241 1460091 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 14:06:33.272514 1460091 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 14:06:33.272561 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (420558900 bytes)
	I0630 14:06:34.883083 1460091 containerd.go:563] duration metric: took 1.615882395s to copy over tarball
	I0630 14:06:34.883194 1460091 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 14:06:36.966670 1460091 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08344467s)
	I0630 14:06:36.966710 1460091 containerd.go:570] duration metric: took 2.083586834s to extract the tarball
	I0630 14:06:36.966722 1460091 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 14:06:37.007649 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:37.150742 1460091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0630 14:06:37.193070 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:37.245622 1460091 retry.go:31] will retry after 173.895536ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-06-30T14:06:37Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0630 14:06:37.420139 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:37.464724 1460091 containerd.go:627] all images are preloaded for containerd runtime.
	I0630 14:06:37.464758 1460091 cache_images.go:84] Images are preloaded, skipping loading
	I0630 14:06:37.464771 1460091 kubeadm.go:926] updating node { 192.168.39.114 8443 v1.33.2 containerd true true} ...
	I0630 14:06:37.464919 1460091 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-412730 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 14:06:37.465002 1460091 ssh_runner.go:195] Run: sudo crictl info
	I0630 14:06:37.511001 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:37.511034 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:37.511049 1460091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 14:06:37.511083 1460091 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-412730 NodeName:addons-412730 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 14:06:37.511271 1460091 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-412730"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0630 14:06:37.511357 1460091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 14:06:37.525652 1460091 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 14:06:37.525746 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 14:06:37.538805 1460091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0630 14:06:37.562031 1460091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 14:06:37.587566 1460091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2309 bytes)
	I0630 14:06:37.610218 1460091 ssh_runner.go:195] Run: grep 192.168.39.114	control-plane.minikube.internal$ /etc/hosts
	I0630 14:06:37.615571 1460091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:06:37.632131 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:37.779642 1460091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:06:37.816746 1460091 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730 for IP: 192.168.39.114
	I0630 14:06:37.816781 1460091 certs.go:194] generating shared ca certs ...
	I0630 14:06:37.816801 1460091 certs.go:226] acquiring lock for ca certs: {Name:mk0651a034eff71720267efe75974a64ed116095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:37.816978 1460091 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key
	I0630 14:06:38.156994 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt ...
	I0630 14:06:38.157034 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt: {Name:mkd96adf4b8dd000ef155465cd7541cb4dbc54f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.157267 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key ...
	I0630 14:06:38.157285 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key: {Name:mk6da24087206aaf4a1c31ab7ae44030109e489f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.157410 1460091 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key
	I0630 14:06:38.393807 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt ...
	I0630 14:06:38.393842 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt: {Name:mk321b6cabce084092be365d32608954916437e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.394011 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key ...
	I0630 14:06:38.394022 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key: {Name:mk82210dbfc17828b961241482db840048e12b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.394107 1460091 certs.go:256] generating profile certs ...
	I0630 14:06:38.394167 1460091 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key
	I0630 14:06:38.394181 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt with IP's: []
	I0630 14:06:39.030200 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt ...
	I0630 14:06:39.030240 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: {Name:mkc9df953aca8566f0870f2298300ff89b509f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.030418 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key ...
	I0630 14:06:39.030431 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key: {Name:mka533b0514825fa7b24c00fc43d73342f608e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.030498 1460091 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367
	I0630 14:06:39.030521 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114]
	I0630 14:06:39.110277 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 ...
	I0630 14:06:39.110319 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367: {Name:mk48ce6fc18dec0b61c5b66960071aff2a24b262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.110478 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367 ...
	I0630 14:06:39.110491 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367: {Name:mk75d3bfb9efccf05811ea90591687efdb3f8988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.110564 1460091 certs.go:381] copying /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt
	I0630 14:06:39.110641 1460091 certs.go:385] copying /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key
	I0630 14:06:39.110691 1460091 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key
	I0630 14:06:39.110708 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt with IP's: []
	I0630 14:06:39.311094 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt ...
	I0630 14:06:39.311131 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt: {Name:mkc683f67a11502b5bdeac9ab79459fda8dea4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.311302 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key ...
	I0630 14:06:39.311315 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key: {Name:mk896db09a1f34404a9d7ba2ae83a6472f785239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.311491 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 14:06:39.311529 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem (1078 bytes)
	I0630 14:06:39.311552 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem (1123 bytes)
	I0630 14:06:39.311574 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem (1675 bytes)
	I0630 14:06:39.312289 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 14:06:39.348883 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 14:06:39.387215 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 14:06:39.418089 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0630 14:06:39.456310 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 14:06:39.485942 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 14:06:39.518368 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 14:06:39.550454 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 14:06:39.582512 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 14:06:39.617828 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 14:06:39.640030 1460091 ssh_runner.go:195] Run: openssl version
	I0630 14:06:39.647364 1460091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 14:06:39.660898 1460091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.666460 1460091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.666541 1460091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.674132 1460091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 14:06:39.687542 1460091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 14:06:39.692849 1460091 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 14:06:39.692930 1460091 kubeadm.go:392] StartCluster: {Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:06:39.693042 1460091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0630 14:06:39.693124 1460091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 14:06:39.733818 1460091 cri.go:89] found id: ""
	I0630 14:06:39.733920 1460091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 14:06:39.748350 1460091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 14:06:39.762340 1460091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 14:06:39.774501 1460091 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 14:06:39.774532 1460091 kubeadm.go:157] found existing configuration files:
	
	I0630 14:06:39.774596 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 14:06:39.786405 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 14:06:39.786474 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 14:06:39.798586 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 14:06:39.809858 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 14:06:39.809932 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 14:06:39.822150 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 14:06:39.833619 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 14:06:39.833683 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 14:06:39.845682 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 14:06:39.856947 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 14:06:39.857015 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 14:06:39.870036 1460091 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 14:06:39.922555 1460091 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 14:06:39.922624 1460091 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 14:06:40.045815 1460091 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 14:06:40.045999 1460091 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 14:06:40.046138 1460091 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 14:06:40.052549 1460091 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 14:06:40.071818 1460091 out.go:235]   - Generating certificates and keys ...
	I0630 14:06:40.071955 1460091 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 14:06:40.072042 1460091 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 14:06:40.453325 1460091 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 14:06:40.505817 1460091 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 14:06:41.044548 1460091 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 14:06:41.417521 1460091 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 14:06:41.739226 1460091 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 14:06:41.739421 1460091 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-412730 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0630 14:06:41.843371 1460091 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 14:06:41.843539 1460091 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-412730 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0630 14:06:42.399109 1460091 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 14:06:42.840033 1460091 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 14:06:43.009726 1460091 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 14:06:43.009824 1460091 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 14:06:43.506160 1460091 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 14:06:43.698222 1460091 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 14:06:43.840816 1460091 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 14:06:44.231431 1460091 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 14:06:44.461049 1460091 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 14:06:44.461356 1460091 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 14:06:44.463997 1460091 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 14:06:44.465945 1460091 out.go:235]   - Booting up control plane ...
	I0630 14:06:44.466073 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 14:06:44.466167 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 14:06:44.466289 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 14:06:44.484244 1460091 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 14:06:44.494126 1460091 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 14:06:44.494220 1460091 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 14:06:44.678804 1460091 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 14:06:44.678979 1460091 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 14:06:45.689158 1460091 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.011115741s
	I0630 14:06:45.693304 1460091 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 14:06:45.693435 1460091 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.114:8443/livez
	I0630 14:06:45.694157 1460091 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 14:06:45.694324 1460091 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 14:06:48.529853 1460091 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.836599214s
	I0630 14:06:49.645556 1460091 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.952842655s
	I0630 14:06:51.692654 1460091 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.00153129s
	I0630 14:06:51.707013 1460091 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 14:06:51.730537 1460091 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 14:06:51.769844 1460091 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 14:06:51.770065 1460091 kubeadm.go:310] [mark-control-plane] Marking the node addons-412730 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 14:06:51.785586 1460091 kubeadm.go:310] [bootstrap-token] Using token: ggslqu.tjlqizciadnjmkc4
	I0630 14:06:51.787072 1460091 out.go:235]   - Configuring RBAC rules ...
	I0630 14:06:51.787249 1460091 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 14:06:51.798527 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 14:06:51.808767 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 14:06:51.813113 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 14:06:51.818246 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 14:06:51.822008 1460091 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 14:06:52.099709 1460091 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 14:06:52.594117 1460091 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 14:06:53.099418 1460091 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 14:06:53.100502 1460091 kubeadm.go:310] 
	I0630 14:06:53.100601 1460091 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 14:06:53.100613 1460091 kubeadm.go:310] 
	I0630 14:06:53.100755 1460091 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 14:06:53.100795 1460091 kubeadm.go:310] 
	I0630 14:06:53.100858 1460091 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 14:06:53.100965 1460091 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 14:06:53.101053 1460091 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 14:06:53.101065 1460091 kubeadm.go:310] 
	I0630 14:06:53.101171 1460091 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 14:06:53.101191 1460091 kubeadm.go:310] 
	I0630 14:06:53.101279 1460091 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 14:06:53.101291 1460091 kubeadm.go:310] 
	I0630 14:06:53.101389 1460091 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 14:06:53.101534 1460091 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 14:06:53.101651 1460091 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 14:06:53.101664 1460091 kubeadm.go:310] 
	I0630 14:06:53.101782 1460091 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 14:06:53.101913 1460091 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 14:06:53.101931 1460091 kubeadm.go:310] 
	I0630 14:06:53.102062 1460091 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ggslqu.tjlqizciadnjmkc4 \
	I0630 14:06:53.102204 1460091 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:617c09b4db1bc5793f47445d1f5bc6fe956626f21f2861489a8e746dc9df0278 \
	I0630 14:06:53.102237 1460091 kubeadm.go:310] 	--control-plane 
	I0630 14:06:53.102246 1460091 kubeadm.go:310] 
	I0630 14:06:53.102351 1460091 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 14:06:53.102362 1460091 kubeadm.go:310] 
	I0630 14:06:53.102448 1460091 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ggslqu.tjlqizciadnjmkc4 \
	I0630 14:06:53.102611 1460091 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:617c09b4db1bc5793f47445d1f5bc6fe956626f21f2861489a8e746dc9df0278 
	I0630 14:06:53.104820 1460091 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 14:06:53.104859 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:53.104869 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:53.106742 1460091 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 14:06:53.108147 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 14:06:53.121105 1460091 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0630 14:06:53.146410 1460091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 14:06:53.146477 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:53.146567 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-412730 minikube.k8s.io/updated_at=2025_06_30T14_06_53_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=addons-412730 minikube.k8s.io/primary=true
	I0630 14:06:53.306096 1460091 ops.go:34] apiserver oom_adj: -16
	I0630 14:06:53.306244 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:53.806580 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:54.306722 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:54.807256 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:55.306344 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:55.807179 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.306640 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.807184 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.895027 1460091 kubeadm.go:1105] duration metric: took 3.748614141s to wait for elevateKubeSystemPrivileges
	I0630 14:06:56.895079 1460091 kubeadm.go:394] duration metric: took 17.202154504s to StartCluster
	I0630 14:06:56.895108 1460091 settings.go:142] acquiring lock: {Name:mk841f56cd7a9b39ff7ba20d8e74be5d85ec1f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:56.895268 1460091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:06:56.895670 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/kubeconfig: {Name:mkaf116de3c28eb3dfd9964f3211c065b2db02a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:56.895901 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 14:06:56.895932 1460091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0630 14:06:56.895997 1460091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0630 14:06:56.896117 1460091 addons.go:69] Setting yakd=true in profile "addons-412730"
	I0630 14:06:56.896139 1460091 addons.go:238] Setting addon yakd=true in "addons-412730"
	I0630 14:06:56.896139 1460091 addons.go:69] Setting ingress=true in profile "addons-412730"
	I0630 14:06:56.896159 1460091 addons.go:238] Setting addon ingress=true in "addons-412730"
	I0630 14:06:56.896176 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896165 1460091 addons.go:69] Setting registry=true in profile "addons-412730"
	I0630 14:06:56.896200 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896203 1460091 addons.go:238] Setting addon registry=true in "addons-412730"
	I0630 14:06:56.896203 1460091 addons.go:69] Setting inspektor-gadget=true in profile "addons-412730"
	I0630 14:06:56.896223 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:56.896233 1460091 addons.go:238] Setting addon inspektor-gadget=true in "addons-412730"
	I0630 14:06:56.896223 1460091 addons.go:69] Setting metrics-server=true in profile "addons-412730"
	I0630 14:06:56.896245 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896253 1460091 addons.go:238] Setting addon metrics-server=true in "addons-412730"
	I0630 14:06:56.896265 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896276 1460091 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-412730"
	I0630 14:06:56.896285 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896287 1460091 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-412730"
	I0630 14:06:56.896305 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896570 1460091 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-412730"
	I0630 14:06:56.896661 1460091 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-412730"
	I0630 14:06:56.896723 1460091 addons.go:69] Setting volcano=true in profile "addons-412730"
	I0630 14:06:56.896778 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896785 1460091 addons.go:69] Setting registry-creds=true in profile "addons-412730"
	I0630 14:06:56.896751 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896799 1460091 addons.go:69] Setting volumesnapshots=true in profile "addons-412730"
	I0630 14:06:56.896804 1460091 addons.go:238] Setting addon registry-creds=true in "addons-412730"
	I0630 14:06:56.896811 1460091 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-412730"
	I0630 14:06:56.896816 1460091 addons.go:238] Setting addon volumesnapshots=true in "addons-412730"
	I0630 14:06:56.896825 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896830 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896835 1460091 addons.go:69] Setting cloud-spanner=true in profile "addons-412730"
	I0630 14:06:56.896838 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896836 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896852 1460091 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-412730"
	I0630 14:06:56.896876 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896897 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896918 1460091 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-412730"
	I0630 14:06:56.896941 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897097 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897165 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897187 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897280 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897295 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896826 1460091 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-412730"
	I0630 14:06:56.897181 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897361 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896845 1460091 addons.go:238] Setting addon cloud-spanner=true in "addons-412730"
	I0630 14:06:56.897199 1460091 addons.go:69] Setting storage-provisioner=true in profile "addons-412730"
	I0630 14:06:56.897456 1460091 addons.go:238] Setting addon storage-provisioner=true in "addons-412730"
	I0630 14:06:56.897488 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897499 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897606 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897861 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897876 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897886 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897898 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897978 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898012 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896791 1460091 addons.go:238] Setting addon volcano=true in "addons-412730"
	I0630 14:06:56.898062 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896771 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898162 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896767 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898520 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897212 1460091 addons.go:69] Setting default-storageclass=true in profile "addons-412730"
	I0630 14:06:56.898795 1460091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-412730"
	I0630 14:06:56.899315 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.899389 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897224 1460091 addons.go:69] Setting gcp-auth=true in profile "addons-412730"
	I0630 14:06:56.899644 1460091 mustload.go:65] Loading cluster: addons-412730
	I0630 14:06:56.897241 1460091 addons.go:69] Setting ingress-dns=true in profile "addons-412730"
	I0630 14:06:56.899700 1460091 addons.go:238] Setting addon ingress-dns=true in "addons-412730"
	I0630 14:06:56.899796 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896785 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.899911 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897328 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.899604 1460091 out.go:177] * Verifying Kubernetes components...
	I0630 14:06:56.915173 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:56.925317 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0630 14:06:56.933471 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0630 14:06:56.933567 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:56.933596 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0630 14:06:56.934049 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934108 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.934159 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934204 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.934401 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934443 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.938799 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0630 14:06:56.939041 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0630 14:06:56.939193 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0630 14:06:56.939457 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0630 14:06:56.939729 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940028 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940309 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.940326 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.940413 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940931 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941099 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.941112 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.941179 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.941232 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941301 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941738 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.941788 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.942491 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942515 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.942624 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.942661 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942683 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.942765 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.942792 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942805 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943018 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.943038 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943153 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.943163 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943215 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.943262 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.944142 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.944175 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.944193 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.944211 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.944294 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.944358 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.945770 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.945856 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.946237 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.946282 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.947082 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.947128 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.948967 1460091 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-412730"
	I0630 14:06:56.949015 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.949453 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.949501 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.962217 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.962296 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.973604 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I0630 14:06:56.974149 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.974664 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.974695 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.975099 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.975299 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.975756 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0630 14:06:56.977204 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.977635 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.977698 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.977979 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.978793 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.978814 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.979233 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.979861 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.979908 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.983635 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0630 14:06:56.984067 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0630 14:06:56.984613 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.985289 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.985309 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.985797 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.986422 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.986466 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.987326 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0630 14:06:56.987554 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I0630 14:06:56.988111 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.988781 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.988800 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.988868 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
	I0630 14:06:56.989272 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.989514 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.989982 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.990005 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.990076 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.990136 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.990167 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.990395 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.990688 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.990745 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.991420 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.992366 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.992419 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.992669 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I0630 14:06:56.993907 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.995228 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.995248 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.995880 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.997265 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.999293 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0630 14:06:56.999370 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.001508 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0630 14:06:57.002883 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0630 14:06:57.002916 1460091 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0630 14:06:57.002942 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.003610 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0630 14:06:57.005195 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0630 14:06:57.005935 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.005991 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.006255 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I0630 14:06:57.006289 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.006456 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.006802 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.007205 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0630 14:06:57.007321 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I0630 14:06:57.007438 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007452 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.007601 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007616 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.007742 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007767 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.008050 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008112 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.008285 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008301 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008675 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.008703 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.008723 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.008787 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.008808 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.009263 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.009378 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.009421 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.009781 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.010031 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.010108 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.010355 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.010373 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.010513 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.010533 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.010629 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.010969 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.010977 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.011283 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.011304 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.011392 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.011650 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.011783 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.011867 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.012379 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.012423 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.012599 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.012859 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.012877 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.013047 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.013778 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.014215 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.014495 1460091 addons.go:238] Setting addon default-storageclass=true in "addons-412730"
	I0630 14:06:57.014541 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:57.014778 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.014972 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.015012 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.015647 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.017091 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.017305 1460091 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0630 14:06:57.017315 1460091 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0630 14:06:57.019235 1460091 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0630 14:06:57.019245 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0630 14:06:57.019258 1460091 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0630 14:06:57.019263 1460091 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0630 14:06:57.019284 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.019284 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.019356 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 14:06:57.020515 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45803
	I0630 14:06:57.020579 1460091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:06:57.020596 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 14:06:57.020635 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.021372 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.021977 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.022038 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.022485 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.023104 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.023180 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.023405 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.023860 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.023897 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.025612 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.025864 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.025948 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43573
	I0630 14:06:57.026240 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.026420 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.026868 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.028570 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029396 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.029420 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029587 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.029699 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.029761 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.029777 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029959 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.030089 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.030322 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.030383 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.030669 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.031123 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.031274 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.031289 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.031683 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.037907 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.038177 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.039744 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0630 14:06:57.039978 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42319
	I0630 14:06:57.040537 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.040729 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.041308 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.041328 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.041600 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.041615 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.041928 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.042164 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.042315 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.044033 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0630 14:06:57.044725 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.045331 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.045350 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.045878 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.045938 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.046425 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0630 14:06:57.047116 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.047396 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.047496 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.048257 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.048279 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.048498 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.049312 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.049440 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.049911 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.050622 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0630 14:06:57.050709 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:06:57.051429 1460091 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0630 14:06:57.051993 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.053508 1460091 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:06:57.053531 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0630 14:06:57.053554 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.054413 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42375
	I0630 14:06:57.054437 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:06:57.054478 1460091 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.35
	I0630 14:06:57.054413 1460091 out.go:177]   - Using image docker.io/registry:3.0.0
	I0630 14:06:57.054933 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.055768 1460091 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0630 14:06:57.055790 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0630 14:06:57.055812 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.055852 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.055876 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.056303 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.056581 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0630 14:06:57.056594 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.056599 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0630 14:06:57.056622 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.057388 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
	I0630 14:06:57.058752 1460091 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:06:57.058770 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0630 14:06:57.058788 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.059503 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.060288 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.060307 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.060551 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.060762 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.060918 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.060980 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.061036 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.061516 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0630 14:06:57.062190 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.062207 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.062733 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.062771 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.062855 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.062894 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.062999 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.063152 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.063283 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.063407 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.063631 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.1
	I0630 14:06:57.063848 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.063854 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0630 14:06:57.063891 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0630 14:06:57.064349 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.064387 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.064484 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.064596 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.064660 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.064704 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.064881 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.064942 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.065098 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.065315 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.065331 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.065402 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.065624 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.066156 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.066196 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.066203 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.1
	I0630 14:06:57.066852 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.066874 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.066915 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0630 14:06:57.067252 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.067449 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.067944 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.068048 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.068097 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.068228 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.068613 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.068623 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.068822 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.068891 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.1
	I0630 14:06:57.069115 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.069121 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.069356 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.069425 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0630 14:06:57.069576 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.070270 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.070286 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.070342 1460091 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0630 14:06:57.071005 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.071129 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.071152 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.071943 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.071951 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0630 14:06:57.071970 1460091 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0630 14:06:57.071992 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.072108 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.072154 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.072685 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0630 14:06:57.072774 1460091 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0630 14:06:57.072798 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498069 bytes)
	I0630 14:06:57.072818 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.073341 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.074059 1460091 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0630 14:06:57.074063 1460091 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:06:57.074155 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0630 14:06:57.074179 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.075067 1460091 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0630 14:06:57.075229 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:06:57.075246 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0630 14:06:57.075572 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.076243 1460091 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:06:57.076303 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0630 14:06:57.076329 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.078812 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43631
	I0630 14:06:57.079025 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.079130 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.079652 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.080327 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.080351 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.080481 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.080507 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.080634 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.080858 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.081036 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.081055 1460091 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0630 14:06:57.081228 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.081763 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.082138 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.082262 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.082706 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.082752 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.083020 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.083040 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083087 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.083100 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083265 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.083494 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.083497 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.083593 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083780 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.083786 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.083977 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.084112 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.084235 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.084469 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.084506 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.084520 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.084738 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.084918 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.085065 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.085095 1460091 out.go:177]   - Using image docker.io/busybox:stable
	I0630 14:06:57.085067 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.085223 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.085318 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.085373 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.085526 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.085673 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.085865 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.086430 1460091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:06:57.086442 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0630 14:06:57.086455 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.087486 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0630 14:06:57.087965 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.088516 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.088545 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.089121 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.089329 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.089866 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.090528 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.090554 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.090740 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.090964 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.091072 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.091131 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.091254 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.092992 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0630 14:06:57.094599 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0630 14:06:57.095998 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0630 14:06:57.097039 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0630 14:06:57.098265 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0630 14:06:57.099547 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0630 14:06:57.100645 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0630 14:06:57.101875 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0630 14:06:57.103299 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0630 14:06:57.103321 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0630 14:06:57.103347 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.107000 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0630 14:06:57.107083 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.107594 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.107627 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.107650 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.107840 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.108051 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.108244 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.108441 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.108455 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.108453 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.108913 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.109191 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.111002 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.111252 1460091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 14:06:57.111268 1460091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 14:06:57.111288 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.114635 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.115172 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.115248 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.115422 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.115624 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.115796 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.115964 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	W0630 14:06:57.363795 1460091 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36374->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.363842 1460091 retry.go:31] will retry after 315.136796ms: ssh: handshake failed: read tcp 192.168.39.1:36374->192.168.39.114:22: read: connection reset by peer
	W0630 14:06:57.364018 1460091 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36380->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.364049 1460091 retry.go:31] will retry after 155.525336ms: ssh: handshake failed: read tcp 192.168.39.1:36380->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.701875 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 14:06:57.701976 1460091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:06:57.837038 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0630 14:06:57.837063 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0630 14:06:57.838628 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:06:57.843008 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0630 14:06:57.843041 1460091 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0630 14:06:57.872159 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0630 14:06:57.909976 1460091 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:06:57.910010 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0630 14:06:57.932688 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0630 14:06:57.932733 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0630 14:06:57.995639 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:06:58.066461 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0630 14:06:58.080857 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0630 14:06:58.080899 1460091 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0630 14:06:58.095890 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:06:58.137462 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:06:58.206306 1460091 node_ready.go:35] waiting up to 6m0s for node "addons-412730" to be "Ready" ...
	I0630 14:06:58.209015 1460091 node_ready.go:49] node "addons-412730" is "Ready"
	I0630 14:06:58.209060 1460091 node_ready.go:38] duration metric: took 2.705097ms for node "addons-412730" to be "Ready" ...
	I0630 14:06:58.209080 1460091 api_server.go:52] waiting for apiserver process to appear ...
	I0630 14:06:58.209140 1460091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:06:58.223118 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:06:58.377311 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:06:58.393265 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:06:58.552870 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 14:06:58.629965 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0630 14:06:58.630008 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0630 14:06:58.758806 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0630 14:06:58.758842 1460091 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0630 14:06:58.850972 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:06:58.851001 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0630 14:06:59.026553 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0630 14:06:59.026591 1460091 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0630 14:06:59.029024 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0630 14:06:59.029049 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0630 14:06:59.194467 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:06:59.225323 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0630 14:06:59.225365 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0630 14:06:59.275081 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:06:59.275114 1460091 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0630 14:06:59.277525 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:06:59.360873 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0630 14:06:59.360922 1460091 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0630 14:06:59.365441 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0630 14:06:59.365473 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0630 14:06:59.479182 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0630 14:06:59.479223 1460091 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0630 14:06:59.632112 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:06:59.730609 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0630 14:06:59.730651 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0630 14:06:59.924237 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:06:59.924273 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0630 14:06:59.952744 1460091 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:06:59.952779 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0630 14:07:00.295758 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0630 14:07:00.295801 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0630 14:07:00.609047 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:07:00.711006 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:07:01.077427 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0630 14:07:01.077478 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0630 14:07:01.488779 1460091 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.786858112s)
	I0630 14:07:01.488824 1460091 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0630 14:07:01.488851 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.650181319s)
	I0630 14:07:01.488917 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:01.488939 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:01.489367 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:01.489386 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:01.489398 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:01.489407 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:01.489675 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:01.489692 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:01.519482 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0630 14:07:01.519507 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0630 14:07:01.953943 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0630 14:07:01.953981 1460091 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0630 14:07:02.000299 1460091 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-412730" context rescaled to 1 replicas
	I0630 14:07:02.634511 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0630 14:07:02.634547 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0630 14:07:03.286523 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0630 14:07:03.286560 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0630 14:07:03.817225 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:07:03.817256 1460091 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0630 14:07:04.096118 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0630 14:07:04.096173 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:07:04.099962 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:04.100533 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:07:04.100570 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:04.100887 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:07:04.101144 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:07:04.101379 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:07:04.101559 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:07:04.500309 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:07:05.218352 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0630 14:07:05.643348 1460091 addons.go:238] Setting addon gcp-auth=true in "addons-412730"
	I0630 14:07:05.643433 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:07:05.643934 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:07:05.643986 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:07:05.660744 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0630 14:07:05.661458 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:07:05.662215 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:07:05.662238 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:07:05.662683 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:07:05.663335 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:07:05.663379 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:07:05.682214 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0630 14:07:05.683058 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:07:05.683766 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:07:05.683791 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:07:05.684301 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:07:05.684542 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:07:05.686376 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:07:05.686632 1460091 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0630 14:07:05.686663 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:07:05.690202 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:05.690836 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:07:05.690876 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:05.691075 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:07:05.691278 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:07:05.691467 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:07:05.691655 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:07:11.565837 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.693634263s)
	I0630 14:07:11.565899 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.565914 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.565980 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.570295044s)
	I0630 14:07:11.566027 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.499537s)
	I0630 14:07:11.566089 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (13.470173071s)
	I0630 14:07:11.566122 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566098 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566168 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566176 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.42868021s)
	I0630 14:07:11.566202 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566212 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566039 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566229 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566242 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566137 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566252 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566260 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566283 1460091 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (13.357116893s)
	I0630 14:07:11.566302 1460091 api_server.go:72] duration metric: took 14.670334608s to wait for apiserver process to appear ...
	I0630 14:07:11.566309 1460091 api_server.go:88] waiting for apiserver healthz status ...
	I0630 14:07:11.566329 1460091 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0630 14:07:11.566328 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (13.343175575s)
	I0630 14:07:11.566350 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566360 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566359 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (13.189016834s)
	I0630 14:07:11.566380 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566389 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566439 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566447 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566456 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566462 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566686 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.566242 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566727 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566737 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566745 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566753 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566773 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566782 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566789 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566794 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566839 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.566844 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.173547374s)
	I0630 14:07:11.566862 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566868 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566871 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566874 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566881 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566753 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567113 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567151 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567170 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567176 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567183 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.567190 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.567203 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567217 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567249 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.567258 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.567271 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567282 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567309 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567329 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567335 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567250 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567548 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567578 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567585 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567976 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.568014 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.568021 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.568825 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.568856 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.568865 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566881 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569293 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.016393005s)
	I0630 14:07:11.569320 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569328 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569412 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.374918327s)
	I0630 14:07:11.569425 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569431 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569478 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.291926439s)
	I0630 14:07:11.569490 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569497 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569593 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.937451446s)
	I0630 14:07:11.569615 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569624 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569735 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.960641721s)
	W0630 14:07:11.569757 1460091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:07:11.569775 1460091 retry.go:31] will retry after 330.589533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:07:11.569820 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.858779326s)
	I0630 14:07:11.569834 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569841 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570507 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.570534 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.570540 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.570547 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.570552 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570841 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.570867 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.570873 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.570879 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.570884 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570993 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.571027 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.571032 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.571041 1460091 addons.go:479] Verifying addon metrics-server=true in "addons-412730"
	I0630 14:07:11.571778 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.571807 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.571816 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.571823 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.571830 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.571917 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.572331 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.572343 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.572353 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.572362 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.572758 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.572789 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.572797 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.572807 1460091 addons.go:479] Verifying addon ingress=true in "addons-412730"
	I0630 14:07:11.573202 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573214 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573223 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.573229 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.573243 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573257 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573283 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573302 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573308 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573315 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.573321 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.573502 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573535 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573568 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573586 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573947 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573962 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573971 1460091 addons.go:479] Verifying addon registry=true in "addons-412730"
	I0630 14:07:11.574975 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575013 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.575195 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.575240 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575258 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.575424 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575449 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.574703 1460091 out.go:177] * Verifying ingress addon...
	I0630 14:07:11.574951 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.576902 1460091 out.go:177] * Verifying registry addon...
	I0630 14:07:11.577803 1460091 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-412730 service yakd-dashboard -n yakd-dashboard
	
	I0630 14:07:11.578734 1460091 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0630 14:07:11.579547 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0630 14:07:11.618799 1460091 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0630 14:07:11.642386 1460091 api_server.go:141] control plane version: v1.33.2
	I0630 14:07:11.642428 1460091 api_server.go:131] duration metric: took 76.109211ms to wait for apiserver health ...
	I0630 14:07:11.642442 1460091 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 14:07:11.648379 1460091 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0630 14:07:11.648411 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:11.648426 1460091 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0630 14:07:11.648448 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:11.787935 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.787961 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.788293 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.788355 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:07:11.788482 1460091 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0630 14:07:11.788776 1460091 system_pods.go:59] 17 kube-system pods found
	I0630 14:07:11.788844 1460091 system_pods.go:61] "amd-gpu-device-plugin-jk4pf" [669e6afe-7041-4750-a8b3-b9b16b2c1200] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:07:11.788873 1460091 system_pods.go:61] "coredns-674b8bbfcf-55nn4" [f9bb36d9-fcc7-40a9-a574-a0c0d4a2e249] Running
	I0630 14:07:11.788883 1460091 system_pods.go:61] "csi-hostpath-attacher-0" [b2871319-8553-4b97-acc6-9fa791a121e7] Pending
	I0630 14:07:11.788891 1460091 system_pods.go:61] "etcd-addons-412730" [0d20e35f-0200-4c76-93c7-c5dc73170568] Running
	I0630 14:07:11.788902 1460091 system_pods.go:61] "kube-apiserver-addons-412730" [f635944a-97e7-41a4-93a2-bb7fcee2b33b] Running
	I0630 14:07:11.788912 1460091 system_pods.go:61] "kube-controller-manager-addons-412730" [bc65f29f-9646-460b-bbd6-d7633581c597] Running
	I0630 14:07:11.788923 1460091 system_pods.go:61] "kube-ingress-dns-minikube" [b9186cc8-be28-421d-8259-84f8fa275c24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:07:11.788933 1460091 system_pods.go:61] "kube-proxy-mgntr" [b2ebef04-6f35-4cb1-a058-5694a72ff27d] Running
	I0630 14:07:11.788941 1460091 system_pods.go:61] "kube-scheduler-addons-412730" [8cb21dd0-89ca-47fb-99e5-03acd8d6fc0f] Running
	I0630 14:07:11.788951 1460091 system_pods.go:61] "metrics-server-7fbb699795-kjqlg" [517ec2e4-c4bc-45b6-ada2-68d1e16b2f19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:07:11.788965 1460091 system_pods.go:61] "nvidia-device-plugin-daemonset-x5r2c" [b30b72eb-28c1-4e3a-972e-9db47c66ac6f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:07:11.788979 1460091 system_pods.go:61] "registry-694bd45846-xjdfn" [2538157e-75f2-429a-9ee9-dcbb6f56a814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:07:11.788992 1460091 system_pods.go:61] "registry-creds-6b69cdcdd5-kxnxr" [5d9d53ec-f97e-4851-9025-f208d9a9e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:07:11.789005 1460091 system_pods.go:61] "registry-proxy-dzp7x" [52f4bc70-5ad7-47f4-bd99-fc5cd471afab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:07:11.789017 1460091 system_pods.go:61] "snapshot-controller-68b874b76f-pn4tl" [26ebb6e6-2f9c-47b1-a6a2-d0bc2631fc74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.789029 1460091 system_pods.go:61] "snapshot-controller-68b874b76f-v6vkl" [3e0abe0b-9975-45f8-ba9b-1b5d010607ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.789036 1460091 system_pods.go:61] "storage-provisioner" [c5a4662a-1e04-4f23-bf87-a78f5608f496] Running
	I0630 14:07:11.789049 1460091 system_pods.go:74] duration metric: took 146.59926ms to wait for pod list to return data ...
	I0630 14:07:11.789066 1460091 default_sa.go:34] waiting for default service account to be created ...
	I0630 14:07:11.852937 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.852969 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.853375 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.853431 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.853445 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.859436 1460091 default_sa.go:45] found service account: "default"
	I0630 14:07:11.859476 1460091 default_sa.go:55] duration metric: took 70.393128ms for default service account to be created ...
	I0630 14:07:11.859487 1460091 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 14:07:11.900655 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:07:11.926835 1460091 system_pods.go:86] 18 kube-system pods found
	I0630 14:07:11.926878 1460091 system_pods.go:89] "amd-gpu-device-plugin-jk4pf" [669e6afe-7041-4750-a8b3-b9b16b2c1200] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:07:11.926886 1460091 system_pods.go:89] "coredns-674b8bbfcf-55nn4" [f9bb36d9-fcc7-40a9-a574-a0c0d4a2e249] Running
	I0630 14:07:11.926914 1460091 system_pods.go:89] "csi-hostpath-attacher-0" [b2871319-8553-4b97-acc6-9fa791a121e7] Pending
	I0630 14:07:11.926919 1460091 system_pods.go:89] "csi-hostpathplugin-z9jlw" [9852b523-2f8d-4c9a-85e8-7ac58ed5eebb] Pending
	I0630 14:07:11.926925 1460091 system_pods.go:89] "etcd-addons-412730" [0d20e35f-0200-4c76-93c7-c5dc73170568] Running
	I0630 14:07:11.926931 1460091 system_pods.go:89] "kube-apiserver-addons-412730" [f635944a-97e7-41a4-93a2-bb7fcee2b33b] Running
	I0630 14:07:11.926940 1460091 system_pods.go:89] "kube-controller-manager-addons-412730" [bc65f29f-9646-460b-bbd6-d7633581c597] Running
	I0630 14:07:11.926949 1460091 system_pods.go:89] "kube-ingress-dns-minikube" [b9186cc8-be28-421d-8259-84f8fa275c24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:07:11.926958 1460091 system_pods.go:89] "kube-proxy-mgntr" [b2ebef04-6f35-4cb1-a058-5694a72ff27d] Running
	I0630 14:07:11.926966 1460091 system_pods.go:89] "kube-scheduler-addons-412730" [8cb21dd0-89ca-47fb-99e5-03acd8d6fc0f] Running
	I0630 14:07:11.926977 1460091 system_pods.go:89] "metrics-server-7fbb699795-kjqlg" [517ec2e4-c4bc-45b6-ada2-68d1e16b2f19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:07:11.926990 1460091 system_pods.go:89] "nvidia-device-plugin-daemonset-x5r2c" [b30b72eb-28c1-4e3a-972e-9db47c66ac6f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:07:11.927011 1460091 system_pods.go:89] "registry-694bd45846-xjdfn" [2538157e-75f2-429a-9ee9-dcbb6f56a814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:07:11.927030 1460091 system_pods.go:89] "registry-creds-6b69cdcdd5-kxnxr" [5d9d53ec-f97e-4851-9025-f208d9a9e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:07:11.927042 1460091 system_pods.go:89] "registry-proxy-dzp7x" [52f4bc70-5ad7-47f4-bd99-fc5cd471afab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:07:11.927050 1460091 system_pods.go:89] "snapshot-controller-68b874b76f-pn4tl" [26ebb6e6-2f9c-47b1-a6a2-d0bc2631fc74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.927061 1460091 system_pods.go:89] "snapshot-controller-68b874b76f-v6vkl" [3e0abe0b-9975-45f8-ba9b-1b5d010607ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.927074 1460091 system_pods.go:89] "storage-provisioner" [c5a4662a-1e04-4f23-bf87-a78f5608f496] Running
	I0630 14:07:11.927089 1460091 system_pods.go:126] duration metric: took 67.593682ms to wait for k8s-apps to be running ...
	I0630 14:07:11.927104 1460091 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 14:07:11.927169 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:07:12.193770 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:12.193803 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:12.354834 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.854466413s)
	I0630 14:07:12.354924 1460091 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.668263946s)
	I0630 14:07:12.354926 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:12.355156 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:12.355521 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:12.355577 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:12.355605 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:12.355625 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:12.355646 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:12.355981 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:12.356003 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:12.356015 1460091 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-412730"
	I0630 14:07:12.356885 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:07:12.357715 1460091 out.go:177] * Verifying csi-hostpath-driver addon...
	I0630 14:07:12.359034 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0630 14:07:12.359721 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0630 14:07:12.360023 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0630 14:07:12.360041 1460091 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0630 14:07:12.406216 1460091 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0630 14:07:12.406263 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:12.559364 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0630 14:07:12.559403 1460091 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0630 14:07:12.584643 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:12.585219 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:12.665811 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:07:12.665844 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0630 14:07:12.836140 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:07:12.865786 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:13.084231 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:13.084272 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:13.365331 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:13.585910 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:13.586224 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:13.635029 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.734314641s)
	I0630 14:07:13.635075 1460091 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.707884059s)
	I0630 14:07:13.635092 1460091 system_svc.go:56] duration metric: took 1.707986766s WaitForService to wait for kubelet
	I0630 14:07:13.635101 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:13.635119 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:13.635108 1460091 kubeadm.go:578] duration metric: took 16.739135366s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:07:13.635141 1460091 node_conditions.go:102] verifying NodePressure condition ...
	I0630 14:07:13.635462 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:13.635484 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:13.635497 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:13.635507 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:13.635808 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:13.635828 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:13.638761 1460091 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 14:07:13.638792 1460091 node_conditions.go:123] node cpu capacity is 2
	I0630 14:07:13.638809 1460091 node_conditions.go:105] duration metric: took 3.661934ms to run NodePressure ...
	I0630 14:07:13.638826 1460091 start.go:241] waiting for startup goroutines ...
	I0630 14:07:13.875752 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:14.024111 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.187911729s)
	I0630 14:07:14.024195 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:14.024227 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:14.024586 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:14.024683 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:14.024691 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:14.024702 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:14.024712 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:14.024994 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:14.025013 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:14.025043 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:14.026382 1460091 addons.go:479] Verifying addon gcp-auth=true in "addons-412730"
	I0630 14:07:14.029054 1460091 out.go:177] * Verifying gcp-auth addon...
	I0630 14:07:14.031483 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0630 14:07:14.064027 1460091 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0630 14:07:14.064055 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:14.100781 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:14.114141 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:14.365832 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:14.534739 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:14.583821 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:14.584016 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:14.864558 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:15.035462 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:15.083316 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:15.083872 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:15.363154 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:15.536843 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:15.584338 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:15.585465 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:15.864842 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:16.035682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:16.084017 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:16.084651 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:16.497202 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:16.537408 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:16.584546 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:16.587004 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:16.863546 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:17.035257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:17.082833 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:17.083256 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:17.367136 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:17.536257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:17.583638 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:17.584977 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:17.896589 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:18.035682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:18.083625 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:18.084228 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:18.363753 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:18.535354 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:18.583096 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:18.583122 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:18.955635 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:19.035257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:19.083049 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:19.083420 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:19.364160 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:19.536108 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:19.582458 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:19.583611 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:19.862653 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:20.034233 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:20.082846 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:20.083682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:20.364310 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:20.535698 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:20.583894 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:20.583979 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:20.863445 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:21.036429 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:21.084981 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:21.085104 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:21.363349 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:21.706174 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:21.707208 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:21.707678 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:21.865772 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:22.035893 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:22.083199 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:22.084016 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:22.364233 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:22.535367 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:22.583354 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:22.583535 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:22.865792 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:23.035789 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:23.136995 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:23.137134 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:23.363626 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:23.535937 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:23.582498 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:23.583466 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:23.864738 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:24.034476 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:24.083541 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:24.084048 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:24.364616 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:24.536239 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:24.583008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:24.583026 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:24.864935 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:25.035523 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:25.082940 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:25.083056 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:25.363774 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:25.534897 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:25.583749 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:25.583954 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:25.863865 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:26.034706 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:26.084015 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:26.084175 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:26.363040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:26.536862 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:26.583797 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:26.583943 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.189951 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:27.190109 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:27.190223 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.191199 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:27.366231 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:27.535516 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:27.584025 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.584989 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:27.864198 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:28.037431 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:28.082788 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:28.083975 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:28.363252 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:28.535710 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:28.583888 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:28.584004 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:28.864040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:29.034895 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:29.082915 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:29.083605 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:29.363381 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:29.535032 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:29.582676 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:29.583815 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:29.865439 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:30.036869 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:30.084069 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:30.084108 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:30.364800 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:30.535912 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:30.583840 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:30.585080 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:30.864767 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:31.044830 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:31.084386 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:31.084487 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:31.364893 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:31.623955 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:31.624096 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:31.625461 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:31.863871 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:32.035869 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:32.085127 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:32.086207 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:32.373662 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:32.539255 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:32.587456 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:32.588975 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:32.863384 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:33.037175 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:33.083368 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:33.086594 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:33.363683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:33.535971 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:33.582220 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:33.583079 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:33.864086 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:34.035104 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:34.087614 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:34.090507 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:34.364243 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:34.535472 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:34.582842 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:34.583065 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:34.864351 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:35.038245 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:35.083459 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:35.083968 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:35.364140 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:35.535203 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:35.583507 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:35.583504 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:35.864421 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:36.035870 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:36.082290 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:36.083322 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:36.363896 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:36.536935 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:36.592002 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:36.592024 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:36.867249 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:37.035497 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:37.082561 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:37.083545 1460091 kapi.go:107] duration metric: took 25.503987228s to wait for kubernetes.io/minikube-addons=registry ...
	I0630 14:07:37.364896 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:37.535915 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:37.582416 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:37.863882 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:38.035195 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:38.084077 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:38.363908 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:38.536012 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:38.582871 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:38.865977 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:39.036008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:39.083221 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:39.366301 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:39.537043 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:39.584445 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:39.864115 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:40.035178 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:40.082503 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:40.364953 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:40.539118 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:40.582790 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:40.920318 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:41.039974 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:41.140897 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:41.363490 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:41.536671 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:41.584110 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.151839 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:42.151893 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.151941 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:42.364151 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:42.535860 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:42.637454 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.869058 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:43.034755 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:43.083141 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:43.365516 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:43.539831 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:43.585574 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:43.867882 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:44.035437 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:44.083399 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:44.364009 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:44.534997 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:44.582616 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:44.865028 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:45.034987 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:45.083033 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:45.363797 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:45.536061 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:45.582192 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:45.863930 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:46.035610 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:46.082940 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:46.363183 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:46.536317 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:46.582800 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:46.863634 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:47.035461 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:47.082263 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:47.364204 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:47.537008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:47.638719 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:47.867382 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:48.035628 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:48.082998 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:48.363676 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:48.535845 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:48.583373 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:48.865933 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:49.035994 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:49.082615 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:49.364741 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:49.763038 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:49.763188 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:49.864019 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:50.034923 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:50.081789 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:50.363509 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:50.536302 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:50.582756 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.084972 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:51.085222 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:51.088586 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.365037 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:51.536393 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:51.583205 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.863948 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:52.036793 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:52.083280 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:52.363764 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:52.534903 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:52.582225 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:52.863489 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:53.035662 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:53.083237 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:53.363683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:53.535229 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:53.582794 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:53.864519 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:54.035606 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:54.083006 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:54.363649 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:54.534894 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:54.582432 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:54.874053 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:55.036295 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:55.138176 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:55.439408 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:55.536289 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:55.583387 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:55.877077 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:56.038681 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:56.088650 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:56.364716 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:56.537099 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:56.638302 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:56.888274 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:57.065461 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:57.082558 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:57.364271 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:57.537383 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:57.584203 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:57.864829 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:58.035093 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:58.082842 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:58.368712 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:58.536145 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:58.583188 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:58.864081 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:59.035171 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:59.082395 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:59.363881 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:59.770427 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:59.775289 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:59.886727 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:00.036389 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:00.138257 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:00.365066 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:00.543394 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:00.587828 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:00.862860 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:01.045510 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:01.084722 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:01.370626 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:01.543476 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:01.643717 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:01.863100 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:02.036395 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:02.083306 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:02.364022 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:02.536447 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:02.582849 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:02.863402 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:03.043769 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:03.084338 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:03.364984 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:03.537068 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:03.583105 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:03.873833 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:04.064570 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:04.165207 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:04.363705 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:04.534655 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:04.582773 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:04.865214 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:05.040132 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:05.082101 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:05.364071 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:05.535996 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:05.583847 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:05.864830 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:06.035167 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:06.082727 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:06.364040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:06.536325 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:06.584424 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:06.867769 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:07.035374 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:07.085873 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:07.363748 1460091 kapi.go:107] duration metric: took 55.004020875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0630 14:08:07.535663 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:07.583300 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:08.036340 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:08.083025 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:08.537501 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:08.583289 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:09.035787 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:09.083288 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:09.536861 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:09.895410 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:10.036972 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:10.103056 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:10.537875 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:10.583172 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:11.036116 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:11.082706 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:11.537110 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:11.583096 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:12.035141 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:12.083220 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:12.535683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:12.583269 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:13.035346 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:13.085856 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:13.535419 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:13.584214 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:14.035523 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:14.086182 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:14.538450 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:14.584164 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:15.035469 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:15.082710 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:15.535978 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:15.584976 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:16.035643 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:16.083354 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:16.536216 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:16.582722 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:17.036015 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:17.082827 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:17.535105 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:17.582197 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:18.036044 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:18.082594 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:18.535731 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:18.636867 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:19.040011 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:19.084634 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:19.538800 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:19.584691 1460091 kapi.go:107] duration metric: took 1m8.005950872s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0630 14:08:20.046904 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:20.544735 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:21.045744 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:21.545748 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:22.039630 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:22.538370 1460091 kapi.go:107] duration metric: took 1m8.506886725s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0630 14:08:22.539980 1460091 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-412730 cluster.
	I0630 14:08:22.541245 1460091 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0630 14:08:22.542490 1460091 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0630 14:08:22.544085 1460091 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, volcano, inspektor-gadget, registry-creds, cloud-spanner, metrics-server, ingress-dns, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0630 14:08:22.545451 1460091 addons.go:514] duration metric: took 1m25.649456906s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin volcano inspektor-gadget registry-creds cloud-spanner metrics-server ingress-dns storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0630 14:08:22.545505 1460091 start.go:246] waiting for cluster config update ...
	I0630 14:08:22.545527 1460091 start.go:255] writing updated cluster config ...
	I0630 14:08:22.545830 1460091 ssh_runner.go:195] Run: rm -f paused
	I0630 14:08:22.552874 1460091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:08:22.645593 1460091 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-55nn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.650587 1460091 pod_ready.go:94] pod "coredns-674b8bbfcf-55nn4" is "Ready"
	I0630 14:08:22.650616 1460091 pod_ready.go:86] duration metric: took 4.992795ms for pod "coredns-674b8bbfcf-55nn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.653714 1460091 pod_ready.go:83] waiting for pod "etcd-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.658042 1460091 pod_ready.go:94] pod "etcd-addons-412730" is "Ready"
	I0630 14:08:22.658066 1460091 pod_ready.go:86] duration metric: took 4.323836ms for pod "etcd-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.660310 1460091 pod_ready.go:83] waiting for pod "kube-apiserver-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.664410 1460091 pod_ready.go:94] pod "kube-apiserver-addons-412730" is "Ready"
	I0630 14:08:22.664433 1460091 pod_ready.go:86] duration metric: took 4.099276ms for pod "kube-apiserver-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.666354 1460091 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.958219 1460091 pod_ready.go:94] pod "kube-controller-manager-addons-412730" is "Ready"
	I0630 14:08:22.958253 1460091 pod_ready.go:86] duration metric: took 291.880924ms for pod "kube-controller-manager-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.158459 1460091 pod_ready.go:83] waiting for pod "kube-proxy-mgntr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.557555 1460091 pod_ready.go:94] pod "kube-proxy-mgntr" is "Ready"
	I0630 14:08:23.557587 1460091 pod_ready.go:86] duration metric: took 399.092549ms for pod "kube-proxy-mgntr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.758293 1460091 pod_ready.go:83] waiting for pod "kube-scheduler-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:24.157033 1460091 pod_ready.go:94] pod "kube-scheduler-addons-412730" is "Ready"
	I0630 14:08:24.157070 1460091 pod_ready.go:86] duration metric: took 398.746217ms for pod "kube-scheduler-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:24.157088 1460091 pod_ready.go:40] duration metric: took 1.604151264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:08:24.206500 1460091 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 14:08:24.208969 1460091 out.go:177] * Done! kubectl is now configured to use "addons-412730" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a0b36f35dec94       4e1d3ecf2ae81       29 seconds ago      Exited              gadget                                   6                   a9375e17c0bdc       gadget-xjkv5
	83d59b6e956ec       09430b9a8a1c6       48 seconds ago      Running             volcano-controllers                      0                   8ea9960a5b452       volcano-controllers-7b774bbd55-5gzgs
	c553543b6c96d       926061c8f6ec3       59 seconds ago      Exited              cloud-spanner-emulator                   6                   70915f0510b8c       cloud-spanner-emulator-6d967984f9-gqgvc
	99952e09184df       7a12f2aed60be       6 minutes ago       Running             gcp-auth                                 0                   1420b3eb59860       gcp-auth-cd9db85c-dj66z
	f523949ea8096       0e02c2116c89b       6 minutes ago       Running             admission                                0                   8c549e502ff8d       volcano-admission-55859c8887-pfpvb
	a41e1f5d78ba3       158e2f2d90f21       6 minutes ago       Running             controller                               0                   ad79beda1cd96       ingress-nginx-controller-67687b59dd-vvcrv
	0383a04db64b6       738351fd438f0       6 minutes ago       Running             csi-snapshotter                          0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	b2d34cd3b4b5f       931dbfd16f87c       6 minutes ago       Running             csi-provisioner                          0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	7083636dce9aa       e899260153aed       6 minutes ago       Running             liveness-probe                           0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	bfebc08e181a7       e255e073c508c       6 minutes ago       Running             hostpath                                 0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	49bfc828f9828       88ef14a257f42       6 minutes ago       Running             node-driver-registrar                    0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	02d5183cb541e       19a639eda60f0       6 minutes ago       Running             csi-resizer                              0                   1b37be17df7f2       csi-hostpath-resizer-0
	40b28663fd84f       a1ed5895ba635       6 minutes ago       Running             csi-external-health-monitor-controller   0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	b66ddaac6e88a       59cbb42146a37       6 minutes ago       Running             csi-attacher                             0                   6f9489fdc4235       csi-hostpath-attacher-0
	26fe9ebfb74f8       0e02c2116c89b       6 minutes ago       Exited              main                                     0                   9599ab115d204       volcano-admission-init-lsxww
	2c3efa502f6ac       0ea86a0862033       6 minutes ago       Exited              patch                                    0                   479724e3cf758       ingress-nginx-admission-patch-fl6cb
	dca6ca157e955       aa61ee9c70bc4       6 minutes ago       Running             volume-snapshot-controller               0                   82ccf34d900ac       snapshot-controller-68b874b76f-v6vkl
	8ff6da260516f       0ea86a0862033       6 minutes ago       Exited              create                                   0                   104d25c1177d7       ingress-nginx-admission-create-gpszb
	b61ad9d665eb6       aa61ee9c70bc4       6 minutes ago       Running             volume-snapshot-controller               0                   9aa1ac650c210       snapshot-controller-68b874b76f-pn4tl
	570af6801674e       c7e3a3eeaf5ed       6 minutes ago       Running             yakd                                     0                   e17aea4a11170       yakd-dashboard-575dd5996b-7594f
	9d1dce2bd3c5f       e16d1e3a10667       6 minutes ago       Running             local-path-provisioner                   0                   115dda0086b6d       local-path-provisioner-76f89f99b5-rnqpb
	1e00c89f4e403       48d9cfaaf3904       6 minutes ago       Running             metrics-server                           0                   4963f9e68ccfd       metrics-server-7fbb699795-kjqlg
	c5f1b5a17deba       b1c9f9ef5f0c2       6 minutes ago       Running             registry-proxy                           0                   0307437ef0916       registry-proxy-dzp7x
	32aa49c8026c6       3dec7d02aaeab       6 minutes ago       Running             registry                                 0                   4671a41226c15       registry-694bd45846-xjdfn
	2618e4dc11783       30dd67412fdea       6 minutes ago       Running             minikube-ingress-dns                     0                   0fd95f2b44624       kube-ingress-dns-minikube
	811184505fb18       d5e667c0f2bb6       7 minutes ago       Running             amd-gpu-device-plugin                    0                   b44acdeabc7e9       amd-gpu-device-plugin-jk4pf
	977ca93019349       71f4541d753b0       7 minutes ago       Running             nvidia-device-plugin-ctr                 0                   c066b0fa89082       nvidia-device-plugin-daemonset-x5r2c
	60e507365f1d3       6e38f40d628db       7 minutes ago       Running             storage-provisioner                      0                   c81c97cad8c5e       storage-provisioner
	8e1e019f61b20       1cf5f116067c6       7 minutes ago       Running             coredns                                  0                   f0e3a5c4dc1ba       coredns-674b8bbfcf-55nn4
	e9d272ef95cc8       661d404f36f01       7 minutes ago       Running             kube-proxy                               0                   ec083bc9ceaf6       kube-proxy-mgntr
	cda40c61e5780       cfed1ff748928       7 minutes ago       Running             kube-scheduler                           0                   8b62447a9ffbc       kube-scheduler-addons-412730
	0f5bd8617276d       ee794efa53d85       7 minutes ago       Running             kube-apiserver                           0                   296d470d26007       kube-apiserver-addons-412730
	ed722ba732c02       ff4f56c76b82d       7 minutes ago       Running             kube-controller-manager                  0                   6de0b1c4abb94       kube-controller-manager-addons-412730
	0aa8fdef51063       499038711c081       7 minutes ago       Running             etcd                                     0                   2ea511d5408a9       etcd-addons-412730
	
	
	==> containerd <==
	Jun 30 14:13:37 addons-412730 containerd[860]: time="2025-06-30T14:13:37.337072450Z" level=info msg="Pulled image \"docker.io/volcanosh/vc-controller-manager:v1.12.1@sha256:3815883c32f62c3a60b8208ba834f304d91d8f05cddfabd440aa15f7f8bef296\" with image id \"sha256:09430b9a8a1c68213024f5dcd421d0d50b61152684167d3af8ec266630200a1b\", repo tag \"\", repo digest \"docker.io/volcanosh/vc-controller-manager@sha256:3815883c32f62c3a60b8208ba834f304d91d8f05cddfabd440aa15f7f8bef296\", size \"38670448\" in 1.893190558s"
	Jun 30 14:13:37 addons-412730 containerd[860]: time="2025-06-30T14:13:37.337129322Z" level=info msg="PullImage \"docker.io/volcanosh/vc-controller-manager:v1.12.1@sha256:3815883c32f62c3a60b8208ba834f304d91d8f05cddfabd440aa15f7f8bef296\" returns image reference \"sha256:09430b9a8a1c68213024f5dcd421d0d50b61152684167d3af8ec266630200a1b\""
	Jun 30 14:13:37 addons-412730 containerd[860]: time="2025-06-30T14:13:37.345062453Z" level=info msg="CreateContainer within sandbox \"8ea9960a5b4523eb50deb21654f97434bf974e34062dc57c71721689eb52480f\" for container &ContainerMetadata{Name:volcano-controllers,Attempt:0,}"
	Jun 30 14:13:37 addons-412730 containerd[860]: time="2025-06-30T14:13:37.367373682Z" level=info msg="CreateContainer within sandbox \"8ea9960a5b4523eb50deb21654f97434bf974e34062dc57c71721689eb52480f\" for &ContainerMetadata{Name:volcano-controllers,Attempt:0,} returns container id \"83d59b6e956ec66b89120691c1ef74b63d4fc58d1236299c56fc415017bf3ff5\""
	Jun 30 14:13:37 addons-412730 containerd[860]: time="2025-06-30T14:13:37.368115281Z" level=info msg="StartContainer for \"83d59b6e956ec66b89120691c1ef74b63d4fc58d1236299c56fc415017bf3ff5\""
	Jun 30 14:13:37 addons-412730 containerd[860]: time="2025-06-30T14:13:37.460760278Z" level=info msg="StartContainer for \"83d59b6e956ec66b89120691c1ef74b63d4fc58d1236299c56fc415017bf3ff5\" returns successfully"
	Jun 30 14:13:45 addons-412730 containerd[860]: time="2025-06-30T14:13:45.443359894Z" level=info msg="PullImage \"docker.io/volcanosh/vc-scheduler:v1.12.1@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\""
	Jun 30 14:13:45 addons-412730 containerd[860]: time="2025-06-30T14:13:45.446409384Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:13:45 addons-412730 containerd[860]: time="2025-06-30T14:13:45.521250703Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:13:45 addons-412730 containerd[860]: time="2025-06-30T14:13:45.631083631Z" level=error msg="PullImage \"docker.io/volcanosh/vc-scheduler:v1.12.1@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\" failed" error="failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Jun 30 14:13:45 addons-412730 containerd[860]: time="2025-06-30T14:13:45.631286237Z" level=info msg="stop pulling image docker.io/volcanosh/vc-scheduler@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2: active requests=0, bytes read=11015"
	Jun 30 14:13:56 addons-412730 containerd[860]: time="2025-06-30T14:13:56.446368227Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469\""
	Jun 30 14:13:56 addons-412730 containerd[860]: time="2025-06-30T14:13:56.586570821Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jun 30 14:13:56 addons-412730 containerd[860]: time="2025-06-30T14:13:56.588315829Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469: active requests=0, bytes read=89"
	Jun 30 14:13:56 addons-412730 containerd[860]: time="2025-06-30T14:13:56.592193377Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469\" with image id \"sha256:4e1d3ecf2ae81d58a56fdee0b75796f78ffac8c66ae36e1f4554bf5966ba738a\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469\", size \"78190156\" in 145.777736ms"
	Jun 30 14:13:56 addons-412730 containerd[860]: time="2025-06-30T14:13:56.592338364Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0@sha256:1ba1900f625d235ee85737a948b363f620b2494f0963eb06c39898f37e470469\" returns image reference \"sha256:4e1d3ecf2ae81d58a56fdee0b75796f78ffac8c66ae36e1f4554bf5966ba738a\""
	Jun 30 14:13:56 addons-412730 containerd[860]: time="2025-06-30T14:13:56.598140087Z" level=info msg="CreateContainer within sandbox \"a9375e17c0bdc7f0cf74e0ba48f5c50a366d7c7c34d8b73e8ab1136b593adff5\" for container &ContainerMetadata{Name:gadget,Attempt:6,}"
	Jun 30 14:13:56 addons-412730 containerd[860]: time="2025-06-30T14:13:56.621157560Z" level=info msg="CreateContainer within sandbox \"a9375e17c0bdc7f0cf74e0ba48f5c50a366d7c7c34d8b73e8ab1136b593adff5\" for &ContainerMetadata{Name:gadget,Attempt:6,} returns container id \"a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd\""
	Jun 30 14:13:56 addons-412730 containerd[860]: time="2025-06-30T14:13:56.622874906Z" level=info msg="StartContainer for \"a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd\""
	Jun 30 14:13:56 addons-412730 containerd[860]: time="2025-06-30T14:13:56.695065211Z" level=info msg="StartContainer for \"a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd\" returns successfully"
	Jun 30 14:13:58 addons-412730 containerd[860]: time="2025-06-30T14:13:58.195858001Z" level=info msg="shim disconnected" id=a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd namespace=k8s.io
	Jun 30 14:13:58 addons-412730 containerd[860]: time="2025-06-30T14:13:58.196357847Z" level=warning msg="cleaning up after shim disconnected" id=a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd namespace=k8s.io
	Jun 30 14:13:58 addons-412730 containerd[860]: time="2025-06-30T14:13:58.196558913Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jun 30 14:13:58 addons-412730 containerd[860]: time="2025-06-30T14:13:58.361000341Z" level=info msg="RemoveContainer for \"bb7b02bbc5c8dd5b9894572995903031fc567d38b2a12418a019ff0ebbf7d45e\""
	Jun 30 14:13:58 addons-412730 containerd[860]: time="2025-06-30T14:13:58.367925254Z" level=info msg="RemoveContainer for \"bb7b02bbc5c8dd5b9894572995903031fc567d38b2a12418a019ff0ebbf7d45e\" returns successfully"
	
	
	==> coredns [8e1e019f61b2004e8815ddbaf9eb6f733467fc8a79bd77196bc0c76b85b8b99c] <==
	[INFO] Reloading complete
	[INFO] 10.244.0.7:37816 - 273 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000500376s
	[INFO] 10.244.0.7:37816 - 48483 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00020548s
	[INFO] 10.244.0.7:37816 - 18283 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000160064s
	[INFO] 10.244.0.7:37816 - 57759 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000505163s
	[INFO] 10.244.0.7:37816 - 2367 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000121216s
	[INFO] 10.244.0.7:37816 - 32941 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000407687s
	[INFO] 10.244.0.7:37816 - 38124 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00021235s
	[INFO] 10.244.0.7:37816 - 42370 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000448784s
	[INFO] 10.244.0.7:49788 - 53103 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191609s
	[INFO] 10.244.0.7:49788 - 52743 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161724s
	[INFO] 10.244.0.7:59007 - 35302 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000389724s
	[INFO] 10.244.0.7:59007 - 35035 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000520532s
	[INFO] 10.244.0.7:46728 - 65447 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133644s
	[INFO] 10.244.0.7:46728 - 65148 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00061652s
	[INFO] 10.244.0.7:50533 - 14727 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000567642s
	[INFO] 10.244.0.7:50533 - 14481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000783618s
	[INFO] 10.244.0.27:51053 - 48711 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000523898s
	[INFO] 10.244.0.27:40917 - 60785 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000642215s
	[INFO] 10.244.0.27:35189 - 63805 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096026s
	[INFO] 10.244.0.27:43478 - 6990 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00040325s
	[INFO] 10.244.0.27:53994 - 15788 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170635s
	[INFO] 10.244.0.27:51155 - 39553 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128149s
	[INFO] 10.244.0.27:37346 - 35756 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001274741s
	[INFO] 10.244.0.27:38294 - 56651 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000805113s
	
	
	==> describe nodes <==
	Name:               addons-412730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-412730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=addons-412730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_06_53_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-412730
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-412730"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:06:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-412730
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:14:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:14:01 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:14:01 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:14:01 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:14:01 +0000   Mon, 30 Jun 2025 14:06:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    addons-412730
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc9448cb8b5448fc9151301fb29bc0cd
	  System UUID:                bc9448cb-8b54-48fc-9151-301fb29bc0cd
	  Boot ID:                    6141a1b2-f9ea-4f8f-bc9e-ef270348f968
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6d967984f9-gqgvc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  gadget                      gadget-xjkv5                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  gcp-auth                    gcp-auth-cd9db85c-dj66z                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  ingress-nginx               ingress-nginx-controller-67687b59dd-vvcrv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         7m18s
	  kube-system                 amd-gpu-device-plugin-jk4pf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 coredns-674b8bbfcf-55nn4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m29s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 csi-hostpathplugin-z9jlw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 etcd-addons-412730                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m34s
	  kube-system                 kube-apiserver-addons-412730                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-controller-manager-addons-412730        200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-proxy-mgntr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-scheduler-addons-412730                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 metrics-server-7fbb699795-kjqlg              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m22s
	  kube-system                 nvidia-device-plugin-daemonset-x5r2c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 registry-694bd45846-xjdfn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 registry-creds-6b69cdcdd5-kxnxr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 registry-proxy-dzp7x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 snapshot-controller-68b874b76f-pn4tl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 snapshot-controller-68b874b76f-v6vkl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  local-path-storage          local-path-provisioner-76f89f99b5-rnqpb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  volcano-system              volcano-admission-55859c8887-pfpvb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  volcano-system              volcano-controllers-7b774bbd55-5gzgs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  volcano-system              volcano-scheduler-854568c9bb-jfhvt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  yakd-dashboard              yakd-dashboard-575dd5996b-7594f              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m41s (x8 over 7m41s)  kubelet          Node addons-412730 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m41s (x8 over 7m41s)  kubelet          Node addons-412730 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m41s (x7 over 7m41s)  kubelet          Node addons-412730 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m34s                  kubelet          Node addons-412730 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s                  kubelet          Node addons-412730 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s                  kubelet          Node addons-412730 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m33s                  kubelet          Node addons-412730 status is now: NodeReady
	  Normal  RegisteredNode           7m30s                  node-controller  Node addons-412730 event: Registered Node addons-412730 in Controller
	
	
	==> dmesg <==
	[  +1.154263] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.092295] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.114831] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.095192] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.150331] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.466879] kauditd_printk_skb: 19 callbacks suppressed
	[Jun30 14:07] kauditd_printk_skb: 128 callbacks suppressed
	[  +1.924750] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.207196] kauditd_printk_skb: 112 callbacks suppressed
	[  +7.400861] kauditd_printk_skb: 40 callbacks suppressed
	[  +4.862777] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.721987] kauditd_printk_skb: 3 callbacks suppressed
	[  +3.179109] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.932449] kauditd_printk_skb: 47 callbacks suppressed
	[  +4.007047] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.735579] kauditd_printk_skb: 26 callbacks suppressed
	[Jun30 14:08] kauditd_printk_skb: 76 callbacks suppressed
	[  +4.704545] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.836614] kauditd_printk_skb: 61 callbacks suppressed
	[Jun30 14:09] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:10] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:13] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [0aa8fdef5106381a33bf7fae10904caa793ace481cae1d43127914ffe86d49ff] <==
	{"level":"info","ts":"2025-06-30T14:07:49.750953Z","caller":"traceutil/trace.go:171","msg":"trace[165552306] linearizableReadLoop","detail":"{readStateIndex:1234; appliedIndex:1233; }","duration":"221.743765ms","start":"2025-06-30T14:07:49.529194Z","end":"2025-06-30T14:07:49.750937Z","steps":["trace[165552306] 'read index received'  (duration: 221.210977ms)","trace[165552306] 'applied index is now lower than readState.Index'  (duration: 531.675µs)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T14:07:49.751318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.024981ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:49.751386Z","caller":"traceutil/trace.go:171","msg":"trace[188957893] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1203; }","duration":"222.190711ms","start":"2025-06-30T14:07:49.529188Z","end":"2025-06-30T14:07:49.751379Z","steps":["trace[188957893] 'agreement among raft nodes before linearized reading'  (duration: 221.958087ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:49.751637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.210142ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:49.751838Z","caller":"traceutil/trace.go:171","msg":"trace[1184992035] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1203; }","duration":"187.410383ms","start":"2025-06-30T14:07:49.564417Z","end":"2025-06-30T14:07:49.751827Z","steps":["trace[1184992035] 'agreement among raft nodes before linearized reading'  (duration: 187.200791ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:49.752758Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.403506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:49.751590Z","caller":"traceutil/trace.go:171","msg":"trace[559772973] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"267.154952ms","start":"2025-06-30T14:07:49.483661Z","end":"2025-06-30T14:07:49.750816Z","steps":["trace[559772973] 'process raft request'  (duration: 266.932951ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:07:49.752866Z","caller":"traceutil/trace.go:171","msg":"trace[154741241] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1203; }","duration":"176.571713ms","start":"2025-06-30T14:07:49.576287Z","end":"2025-06-30T14:07:49.752858Z","steps":["trace[154741241] 'agreement among raft nodes before linearized reading'  (duration: 176.438082ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.060101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.201972ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156627244712664246 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/snapshot-controller-68b874b76f-v6vkl.184dd73930f85720\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/snapshot-controller-68b874b76f-v6vkl.184dd73930f85720\" value_size:707 lease:3156627244712664233 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-06-30T14:07:51.060508Z","caller":"traceutil/trace.go:171","msg":"trace[1403560008] linearizableReadLoop","detail":"{readStateIndex:1246; appliedIndex:1245; }","duration":"269.602891ms","start":"2025-06-30T14:07:50.790891Z","end":"2025-06-30T14:07:51.060494Z","steps":["trace[1403560008] 'read index received'  (duration: 53.900301ms)","trace[1403560008] 'applied index is now lower than readState.Index'  (duration: 215.701517ms)"],"step_count":2}
	{"level":"info","ts":"2025-06-30T14:07:51.060687Z","caller":"traceutil/trace.go:171","msg":"trace[1928328932] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"282.940847ms","start":"2025-06-30T14:07:50.777737Z","end":"2025-06-30T14:07:51.060678Z","steps":["trace[1928328932] 'process raft request'  (duration: 67.101901ms)","trace[1928328932] 'compare'  (duration: 214.876695ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T14:07:51.060917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.674634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:51.060970Z","caller":"traceutil/trace.go:171","msg":"trace[1908369901] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1214; }","duration":"254.762861ms","start":"2025-06-30T14:07:50.806198Z","end":"2025-06-30T14:07:51.060961Z","steps":["trace[1908369901] 'agreement among raft nodes before linearized reading'  (duration: 254.494296ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.061332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.462832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-gpszb\" limit:1 ","response":"range_response_count:1 size:4215"}
	{"level":"info","ts":"2025-06-30T14:07:51.061377Z","caller":"traceutil/trace.go:171","msg":"trace[1518962383] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-create-gpszb; range_end:; response_count:1; response_revision:1214; }","duration":"270.575777ms","start":"2025-06-30T14:07:50.790792Z","end":"2025-06-30T14:07:51.061368Z","steps":["trace[1518962383] 'agreement among raft nodes before linearized reading'  (duration: 270.487611ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.061955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.960425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:51.062418Z","caller":"traceutil/trace.go:171","msg":"trace[621823114] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1214; }","duration":"205.559852ms","start":"2025-06-30T14:07:50.856769Z","end":"2025-06-30T14:07:51.062329Z","steps":["trace[621823114] 'agreement among raft nodes before linearized reading'  (duration: 204.992694ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:55.431218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.529916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:55.431286Z","caller":"traceutil/trace.go:171","msg":"trace[1840291804] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1254; }","duration":"185.638229ms","start":"2025-06-30T14:07:55.245637Z","end":"2025-06-30T14:07:55.431275Z","steps":["trace[1840291804] 'count revisions from in-memory index tree'  (duration: 185.483282ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.760814Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.563816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.761810Z","caller":"traceutil/trace.go:171","msg":"trace[1037456471] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1289; }","duration":"232.616347ms","start":"2025-06-30T14:07:59.529177Z","end":"2025-06-30T14:07:59.761793Z","steps":["trace[1037456471] 'range keys from in-memory index tree'  (duration: 231.18055ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.762324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.982539ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.762383Z","caller":"traceutil/trace.go:171","msg":"trace[856262130] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1289; }","duration":"197.052432ms","start":"2025-06-30T14:07:59.565321Z","end":"2025-06-30T14:07:59.762373Z","steps":["trace[856262130] 'range keys from in-memory index tree'  (duration: 196.924905ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.767749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.524873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.767792Z","caller":"traceutil/trace.go:171","msg":"trace[2033650698] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1289; }","duration":"189.645425ms","start":"2025-06-30T14:07:59.578136Z","end":"2025-06-30T14:07:59.767782Z","steps":["trace[2033650698] 'range keys from in-memory index tree'  (duration: 183.005147ms)"],"step_count":1}
	
	
	==> gcp-auth [99952e09184df23a262c90fb3f77a6cc3e1a9e0c61b2719ed4352f2e40d96588] <==
	2025/06/30 14:08:22 GCP Auth Webhook started!
	
	
	==> kernel <==
	 14:14:26 up 8 min,  0 users,  load average: 0.30, 0.66, 0.49
	Linux addons-412730 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0f5bd8617276d56b4d1c938db3290f5057a6076ca2a1ff6b72007428d9958a0f] <==
	I0630 14:07:10.230006       1 handler.go:288] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0630 14:07:10.273065       1 handler.go:288] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0630 14:07:10.640157       1 alloc.go:328] "allocated clusterIPs" service="volcano-system/volcano-scheduler-service" clusterIPs={"IPv4":"10.106.106.49"}
	I0630 14:07:10.983352       1 handler.go:288] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0630 14:07:11.018765       1 handler.go:288] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0630 14:07:11.050504       1 handler.go:288] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0630 14:07:11.109916       1 handler.go:288] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I0630 14:07:11.337911       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.104.67.73"}
	I0630 14:07:11.348316       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:07:11.396911       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I0630 14:07:12.073104       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.108.110.176"}
	I0630 14:07:12.089566       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:07:12.250258       1 handler.go:288] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0630 14:07:13.775763       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:07:13.779874       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.111.131.230"}
	W0630 14:07:40.975874       1 handler_proxy.go:99] no RequestInfo found in the context
	E0630 14:07:40.977507       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0630 14:07:40.978744       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.104.106:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.104.106:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.104.106:443: connect: connection refused" logger="UnhandledError"
	E0630 14:07:40.989835       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.104.106:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.104.106:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.104.106:443: connect: connection refused" logger="UnhandledError"
	E0630 14:07:40.997179       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.104.106:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.104.106:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.104.106:443: connect: connection refused" logger="UnhandledError"
	E0630 14:07:41.007389       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.104.106:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.104.106:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.104.106:443: connect: connection refused" logger="UnhandledError"
	I0630 14:07:41.125681       1 handler.go:288] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [ed722ba732c0211e772331fd643a8e48e5ef0b8cd4b82f97d3a5d69b9aa30756] <==
	I0630 14:06:56.424516       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0630 14:06:56.452090       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 14:06:56.471017       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 14:06:56.471241       1 shared_informer.go:357] "Caches are synced" controller="expand"
	I0630 14:06:56.521650       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0630 14:06:56.527415       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0630 14:06:56.539163       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:06:56.582721       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:06:57.015609       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:06:57.073838       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:06:57.073948       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:06:57.073961       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0630 14:07:26.590853       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0630 14:07:26.592558       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I0630 14:07:26.592809       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I0630 14:07:26.593107       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I0630 14:07:26.593357       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I0630 14:07:26.593536       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="traces.gadget.kinvolk.io"
	I0630 14:07:26.593685       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I0630 14:07:26.593862       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0630 14:07:26.594073       1 shared_informer.go:350] "Waiting for caches to sync" controller="resource quota"
	I0630 14:07:27.024055       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0630 14:07:27.029196       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0630 14:07:27.195349       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:07:27.230670       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e9d272ef95cc8f73e12d5cc59f4966731013d924126fc8eb0bd96e6acc623f27] <==
	E0630 14:06:58.349607       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:06:58.396678       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	E0630 14:06:58.396782       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:06:58.682235       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:06:58.682289       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:06:58.682317       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:06:58.729336       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:06:58.729702       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:06:58.729714       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:06:58.747265       1 config.go:199] "Starting service config controller"
	I0630 14:06:58.747303       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:06:58.747324       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:06:58.747328       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:06:58.747339       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:06:58.747342       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:06:58.747357       1 config.go:329] "Starting node config controller"
	I0630 14:06:58.747360       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:06:58.847644       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0630 14:06:58.847708       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:06:58.847734       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:06:58.848003       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cda40c61e5780477d5a234f04d425f2347a784973443632c68938aea16f474e6] <==
	E0630 14:06:49.633867       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:06:49.633920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:06:49.634247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:06:49.636896       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:06:49.637563       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:06:49.637783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:06:49.638039       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:06:49.638190       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:06:49.638365       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:06:49.638496       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:06:49.638609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:06:49.638719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:06:49.638999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:06:50.551259       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:06:50.618504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:06:50.628999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0630 14:06:50.679571       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:06:50.702747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:06:50.708224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:06:50.796622       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:06:50.797647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:06:50.806980       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:06:50.808489       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:06:50.967143       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0630 14:06:53.415169       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 30 14:13:45 addons-412730 kubelet[1571]: E0630 14:13:45.633330    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-854568c9bb-jfhvt" podUID="e37a78c0-cf90-49a3-bdb1-32ceb4f43f52"
	Jun 30 14:13:46 addons-412730 kubelet[1571]: I0630 14:13:46.445528    1571 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dzp7x" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:13:54 addons-412730 kubelet[1571]: I0630 14:13:54.443051    1571 scope.go:117] "RemoveContainer" containerID="c553543b6c96d9c77554dc881cc9992dbe932b71bb7fcf925be75a4b2ebbda3d"
	Jun 30 14:13:54 addons-412730 kubelet[1571]: E0630 14:13:54.443858    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloud-spanner-emulator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cloud-spanner-emulator pod=cloud-spanner-emulator-6d967984f9-gqgvc_default(0920ab8a-8a65-4046-bebe-4d3e25cc6f9a)\"" pod="default/cloud-spanner-emulator-6d967984f9-gqgvc" podUID="0920ab8a-8a65-4046-bebe-4d3e25cc6f9a"
	Jun 30 14:13:56 addons-412730 kubelet[1571]: I0630 14:13:56.443781    1571 scope.go:117] "RemoveContainer" containerID="bb7b02bbc5c8dd5b9894572995903031fc567d38b2a12418a019ff0ebbf7d45e"
	Jun 30 14:13:57 addons-412730 kubelet[1571]: E0630 14:13:57.443392    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.12.1@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-854568c9bb-jfhvt" podUID="e37a78c0-cf90-49a3-bdb1-32ceb4f43f52"
	Jun 30 14:13:58 addons-412730 kubelet[1571]: I0630 14:13:58.356826    1571 scope.go:117] "RemoveContainer" containerID="bb7b02bbc5c8dd5b9894572995903031fc567d38b2a12418a019ff0ebbf7d45e"
	Jun 30 14:13:58 addons-412730 kubelet[1571]: I0630 14:13:58.357250    1571 scope.go:117] "RemoveContainer" containerID="a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd"
	Jun 30 14:13:58 addons-412730 kubelet[1571]: E0630 14:13:58.357502    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xjkv5_gadget(db71aa18-e2df-45dc-b69f-a6c5ad147ed0)\"" pod="gadget/gadget-xjkv5" podUID="db71aa18-e2df-45dc-b69f-a6c5ad147ed0"
	Jun 30 14:13:59 addons-412730 kubelet[1571]: I0630 14:13:59.961089    1571 scope.go:117] "RemoveContainer" containerID="a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd"
	Jun 30 14:13:59 addons-412730 kubelet[1571]: E0630 14:13:59.961355    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xjkv5_gadget(db71aa18-e2df-45dc-b69f-a6c5ad147ed0)\"" pod="gadget/gadget-xjkv5" podUID="db71aa18-e2df-45dc-b69f-a6c5ad147ed0"
	Jun 30 14:14:00 addons-412730 kubelet[1571]: I0630 14:14:00.364958    1571 scope.go:117] "RemoveContainer" containerID="a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd"
	Jun 30 14:14:00 addons-412730 kubelet[1571]: E0630 14:14:00.365203    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xjkv5_gadget(db71aa18-e2df-45dc-b69f-a6c5ad147ed0)\"" pod="gadget/gadget-xjkv5" podUID="db71aa18-e2df-45dc-b69f-a6c5ad147ed0"
	Jun 30 14:14:00 addons-412730 kubelet[1571]: I0630 14:14:00.444336    1571 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jk4pf" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:14:01 addons-412730 kubelet[1571]: I0630 14:14:01.373424    1571 scope.go:117] "RemoveContainer" containerID="a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd"
	Jun 30 14:14:01 addons-412730 kubelet[1571]: E0630 14:14:01.374159    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xjkv5_gadget(db71aa18-e2df-45dc-b69f-a6c5ad147ed0)\"" pod="gadget/gadget-xjkv5" podUID="db71aa18-e2df-45dc-b69f-a6c5ad147ed0"
	Jun 30 14:14:05 addons-412730 kubelet[1571]: I0630 14:14:05.443103    1571 scope.go:117] "RemoveContainer" containerID="c553543b6c96d9c77554dc881cc9992dbe932b71bb7fcf925be75a4b2ebbda3d"
	Jun 30 14:14:05 addons-412730 kubelet[1571]: E0630 14:14:05.443814    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloud-spanner-emulator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cloud-spanner-emulator pod=cloud-spanner-emulator-6d967984f9-gqgvc_default(0920ab8a-8a65-4046-bebe-4d3e25cc6f9a)\"" pod="default/cloud-spanner-emulator-6d967984f9-gqgvc" podUID="0920ab8a-8a65-4046-bebe-4d3e25cc6f9a"
	Jun 30 14:14:08 addons-412730 kubelet[1571]: E0630 14:14:08.443713    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.12.1@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-854568c9bb-jfhvt" podUID="e37a78c0-cf90-49a3-bdb1-32ceb4f43f52"
	Jun 30 14:14:14 addons-412730 kubelet[1571]: I0630 14:14:14.443041    1571 scope.go:117] "RemoveContainer" containerID="a0b36f35dec942b05def7e53c48d516bead17cf1518e5d24a16632cfa4ccaefd"
	Jun 30 14:14:14 addons-412730 kubelet[1571]: E0630 14:14:14.443673    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xjkv5_gadget(db71aa18-e2df-45dc-b69f-a6c5ad147ed0)\"" pod="gadget/gadget-xjkv5" podUID="db71aa18-e2df-45dc-b69f-a6c5ad147ed0"
	Jun 30 14:14:16 addons-412730 kubelet[1571]: I0630 14:14:16.442924    1571 scope.go:117] "RemoveContainer" containerID="c553543b6c96d9c77554dc881cc9992dbe932b71bb7fcf925be75a4b2ebbda3d"
	Jun 30 14:14:16 addons-412730 kubelet[1571]: E0630 14:14:16.443685    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloud-spanner-emulator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cloud-spanner-emulator pod=cloud-spanner-emulator-6d967984f9-gqgvc_default(0920ab8a-8a65-4046-bebe-4d3e25cc6f9a)\"" pod="default/cloud-spanner-emulator-6d967984f9-gqgvc" podUID="0920ab8a-8a65-4046-bebe-4d3e25cc6f9a"
	Jun 30 14:14:17 addons-412730 kubelet[1571]: I0630 14:14:17.442963    1571 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-694bd45846-xjdfn" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:14:19 addons-412730 kubelet[1571]: E0630 14:14:19.443869    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.12.1@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b24ea8af2d167a3525e8fc603b32eca6c9b46ef509fa7e87f09e1fadb992faf2: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-854568c9bb-jfhvt" podUID="e37a78c0-cf90-49a3-bdb1-32ceb4f43f52"
	
	
	==> storage-provisioner [60e507365f1d30c7beac2979b93ea374fc72f0bcfb17244185c70d7ea0c4da2b] <==
	W0630 14:14:01.962546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:03.965344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:03.973375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:05.978469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:05.984103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:07.987354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:07.996149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:09.999573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:10.006134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:12.010303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:12.019043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:14.023550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:14.029174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:16.032495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:16.038417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:18.042004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:18.049675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:20.054010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:20.059365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:22.062769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:22.068491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:24.071580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:24.076860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:26.083266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:14:26.093900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-412730 -n addons-412730
helpers_test.go:261: (dbg) Run:  kubectl --context addons-412730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb registry-creds-6b69cdcdd5-kxnxr volcano-admission-init-lsxww volcano-scheduler-854568c9bb-jfhvt
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-412730 describe pod ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb registry-creds-6b69cdcdd5-kxnxr volcano-admission-init-lsxww volcano-scheduler-854568c9bb-jfhvt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-412730 describe pod ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb registry-creds-6b69cdcdd5-kxnxr volcano-admission-init-lsxww volcano-scheduler-854568c9bb-jfhvt: exit status 1 (64.385579ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gpszb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fl6cb" not found
	Error from server (NotFound): pods "registry-creds-6b69cdcdd5-kxnxr" not found
	Error from server (NotFound): pods "volcano-admission-init-lsxww" not found
	Error from server (NotFound): pods "volcano-scheduler-854568c9bb-jfhvt" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-412730 describe pod ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb registry-creds-6b69cdcdd5-kxnxr volcano-admission-init-lsxww volcano-scheduler-854568c9bb-jfhvt: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 addons disable volcano --alsologtostderr -v=1: (11.492681204s)
--- FAIL: TestAddons/serial/Volcano (374.36s)

TestAddons/parallel/Ingress (492.4s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-412730 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-412730 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-412730 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [64454ac4-31e6-4e37-95db-f9dbfdbc92c3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-412730 -n addons-412730
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-06-30 14:23:12.393722395 +0000 UTC m=+1042.076849344
addons_test.go:252: (dbg) Run:  kubectl --context addons-412730 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-412730 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-412730/192.168.39.114
Start Time:       Mon, 30 Jun 2025 14:15:12 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
  IP:  10.244.0.32
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tpjf9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-tpjf9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/nginx to addons-412730
  Warning  Failed     8m                      kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    5m5s (x5 over 8m)       kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     5m5s (x5 over 8m)       kubelet            Error: ErrImagePull
  Warning  Failed     5m5s (x4 over 7m44s)    kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m56s (x20 over 7m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    2m41s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run:  kubectl --context addons-412730 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-412730 logs nginx -n default: exit status 1 (79.132663ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-412730 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
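Editor's note: every `Failed` event above is Docker Hub's unauthenticated pull rate limit (HTTP 429 Too Many Requests), not a fault in the ingress addon itself. One possible mitigation for reruns is to side-load the test image into the minikube node so the kubelet never contacts Docker Hub. This is only a sketch: the profile name `addons-412730` is taken from this report, and it assumes the Jenkins host can pull (or has cached) the image itself.

```shell
# Pre-pull nginx:alpine on the host, e.g. while authenticated to Docker Hub
# or from a local registry mirror, so only one pull counts against the limit.
docker pull docker.io/nginx:alpine

# Side-load the image into the node's containerd store for this profile.
minikube -p addons-412730 image load docker.io/nginx:alpine

# Confirm the image is now available in-cluster; the pod's
# imagePullPolicy (IfNotPresent for tagged images) will then use it.
minikube -p addons-412730 image ls | grep nginx
```

Alternatively, configuring a registry mirror on the CI hosts (or authenticated pulls) would address the rate limit for all image-pulling tests at once.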
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-412730 -n addons-412730
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 logs -n 25: (1.343786929s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-480082              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-083943              | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-480082              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| start   | --download-only -p                   | binary-mirror-278166 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | binary-mirror-278166                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42597               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-278166              | binary-mirror-278166 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| addons  | disable dashboard -p                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | addons-412730                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | addons-412730                        |                      |         |         |                     |                     |
	| start   | -p addons-412730 --wait=true         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:08 UTC |
	|         | --memory=4096 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=registry-creds              |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:14 UTC | 30 Jun 25 14:14 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:14 UTC | 30 Jun 25 14:14 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:14 UTC | 30 Jun 25 14:14 UTC |
	|         | -p addons-412730                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-412730 ip                     | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | configure registry-creds -f          | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | ./testdata/addons_testconfig.json    |                      |         |         |                     |                     |
	|         | -p addons-412730                     |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable registry-creds               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:20 UTC | 30 Jun 25 14:21 UTC |
	|         | storage-provisioner-rancher          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:21 UTC | 30 Jun 25 14:21 UTC |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:21 UTC | 30 Jun 25 14:21 UTC |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:06:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:06:06.240063 1460091 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:06:06.240209 1460091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:06.240221 1460091 out.go:358] Setting ErrFile to fd 2...
	I0630 14:06:06.240225 1460091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:06.240435 1460091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:06:06.241146 1460091 out.go:352] Setting JSON to false
	I0630 14:06:06.242162 1460091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49689,"bootTime":1751242677,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:06:06.242287 1460091 start.go:140] virtualization: kvm guest
	I0630 14:06:06.244153 1460091 out.go:177] * [addons-412730] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:06:06.245583 1460091 notify.go:220] Checking for updates...
	I0630 14:06:06.245617 1460091 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:06:06.246864 1460091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:06:06.248249 1460091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:06:06.249601 1460091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:06.251003 1460091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:06:06.252187 1460091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:06:06.253562 1460091 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:06:06.289858 1460091 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 14:06:06.291153 1460091 start.go:304] selected driver: kvm2
	I0630 14:06:06.291176 1460091 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:06:06.291195 1460091 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:06:06.292048 1460091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:06:06.292142 1460091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1452140/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:06:06.309060 1460091 install.go:137] /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:06:06.309119 1460091 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:06:06.309429 1460091 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:06:06.309479 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:06.309532 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:06.309546 1460091 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:06:06.309617 1460091 start.go:347] cluster config:
	{Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:06:06.309739 1460091 iso.go:125] acquiring lock: {Name:mk3f178100d94eda06013511859d36adab64257f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:06:06.311683 1460091 out.go:177] * Starting "addons-412730" primary control-plane node in "addons-412730" cluster
	I0630 14:06:06.313225 1460091 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime containerd
	I0630 14:06:06.313276 1460091 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4
	I0630 14:06:06.313292 1460091 cache.go:56] Caching tarball of preloaded images
	I0630 14:06:06.313420 1460091 preload.go:172] Found /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0630 14:06:06.313435 1460091 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on containerd
	I0630 14:06:06.313766 1460091 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json ...
	I0630 14:06:06.313798 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json: {Name:mk9a7a41f109a1f3b7b9e5a38a0e2a1bce3a8d97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:06.313975 1460091 start.go:360] acquireMachinesLock for addons-412730: {Name:mkb4b5035f5dd19ed6df4556a284e7c795570454 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 14:06:06.314058 1460091 start.go:364] duration metric: took 65.368µs to acquireMachinesLock for "addons-412730"
	I0630 14:06:06.314084 1460091 start.go:93] Provisioning new machine with config: &{Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0630 14:06:06.314172 1460091 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 14:06:06.316769 1460091 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0630 14:06:06.316975 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:06.317044 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:06.332767 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0630 14:06:06.333480 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:06.334061 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:06.334083 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:06.334504 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:06.334801 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:06.335019 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:06.335217 1460091 start.go:159] libmachine.API.Create for "addons-412730" (driver="kvm2")
	I0630 14:06:06.335248 1460091 client.go:168] LocalClient.Create starting
	I0630 14:06:06.335289 1460091 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem
	I0630 14:06:06.483712 1460091 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem
	I0630 14:06:06.592251 1460091 main.go:141] libmachine: Running pre-create checks...
	I0630 14:06:06.592287 1460091 main.go:141] libmachine: (addons-412730) Calling .PreCreateCheck
	I0630 14:06:06.592947 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:06.593668 1460091 main.go:141] libmachine: Creating machine...
	I0630 14:06:06.593697 1460091 main.go:141] libmachine: (addons-412730) Calling .Create
	I0630 14:06:06.594139 1460091 main.go:141] libmachine: (addons-412730) creating KVM machine...
	I0630 14:06:06.594168 1460091 main.go:141] libmachine: (addons-412730) creating network...
	I0630 14:06:06.595936 1460091 main.go:141] libmachine: (addons-412730) DBG | found existing default KVM network
	I0630 14:06:06.596779 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.596550 1460113 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020ef20}
	I0630 14:06:06.596808 1460091 main.go:141] libmachine: (addons-412730) DBG | created network xml: 
	I0630 14:06:06.596818 1460091 main.go:141] libmachine: (addons-412730) DBG | <network>
	I0630 14:06:06.596822 1460091 main.go:141] libmachine: (addons-412730) DBG |   <name>mk-addons-412730</name>
	I0630 14:06:06.596828 1460091 main.go:141] libmachine: (addons-412730) DBG |   <dns enable='no'/>
	I0630 14:06:06.596832 1460091 main.go:141] libmachine: (addons-412730) DBG |   
	I0630 14:06:06.596839 1460091 main.go:141] libmachine: (addons-412730) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 14:06:06.596851 1460091 main.go:141] libmachine: (addons-412730) DBG |     <dhcp>
	I0630 14:06:06.596865 1460091 main.go:141] libmachine: (addons-412730) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 14:06:06.596872 1460091 main.go:141] libmachine: (addons-412730) DBG |     </dhcp>
	I0630 14:06:06.596877 1460091 main.go:141] libmachine: (addons-412730) DBG |   </ip>
	I0630 14:06:06.596883 1460091 main.go:141] libmachine: (addons-412730) DBG |   
	I0630 14:06:06.596888 1460091 main.go:141] libmachine: (addons-412730) DBG | </network>
	I0630 14:06:06.596897 1460091 main.go:141] libmachine: (addons-412730) DBG | 
	I0630 14:06:06.602938 1460091 main.go:141] libmachine: (addons-412730) DBG | trying to create private KVM network mk-addons-412730 192.168.39.0/24...
	I0630 14:06:06.682845 1460091 main.go:141] libmachine: (addons-412730) DBG | private KVM network mk-addons-412730 192.168.39.0/24 created
	I0630 14:06:06.682892 1460091 main.go:141] libmachine: (addons-412730) setting up store path in /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 ...
	I0630 14:06:06.682905 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.682807 1460113 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:06.682951 1460091 main.go:141] libmachine: (addons-412730) building disk image from file:///home/jenkins/minikube-integration/20991-1452140/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:06:06.682983 1460091 main.go:141] libmachine: (addons-412730) Downloading /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1452140/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 14:06:06.983317 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.983139 1460113 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa...
	I0630 14:06:07.030013 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:07.029839 1460113 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/addons-412730.rawdisk...
	I0630 14:06:07.030043 1460091 main.go:141] libmachine: (addons-412730) DBG | Writing magic tar header
	I0630 14:06:07.030053 1460091 main.go:141] libmachine: (addons-412730) DBG | Writing SSH key tar header
	I0630 14:06:07.030061 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:07.029966 1460113 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 ...
	I0630 14:06:07.030071 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730
	I0630 14:06:07.030150 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 (perms=drwx------)
	I0630 14:06:07.030175 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines (perms=drwxr-xr-x)
	I0630 14:06:07.030186 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines
	I0630 14:06:07.030199 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube (perms=drwxr-xr-x)
	I0630 14:06:07.030230 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140 (perms=drwxrwxr-x)
	I0630 14:06:07.030243 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 14:06:07.030249 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:07.030257 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140
	I0630 14:06:07.030272 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 14:06:07.030284 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 14:06:07.030316 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins
	I0630 14:06:07.030332 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home
	I0630 14:06:07.030374 1460091 main.go:141] libmachine: (addons-412730) creating domain...
	I0630 14:06:07.030392 1460091 main.go:141] libmachine: (addons-412730) DBG | skipping /home - not owner
	I0630 14:06:07.031398 1460091 main.go:141] libmachine: (addons-412730) define libvirt domain using xml: 
	I0630 14:06:07.031420 1460091 main.go:141] libmachine: (addons-412730) <domain type='kvm'>
	I0630 14:06:07.031429 1460091 main.go:141] libmachine: (addons-412730)   <name>addons-412730</name>
	I0630 14:06:07.031435 1460091 main.go:141] libmachine: (addons-412730)   <memory unit='MiB'>4096</memory>
	I0630 14:06:07.031443 1460091 main.go:141] libmachine: (addons-412730)   <vcpu>2</vcpu>
	I0630 14:06:07.031449 1460091 main.go:141] libmachine: (addons-412730)   <features>
	I0630 14:06:07.031457 1460091 main.go:141] libmachine: (addons-412730)     <acpi/>
	I0630 14:06:07.031472 1460091 main.go:141] libmachine: (addons-412730)     <apic/>
	I0630 14:06:07.031484 1460091 main.go:141] libmachine: (addons-412730)     <pae/>
	I0630 14:06:07.031495 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.031506 1460091 main.go:141] libmachine: (addons-412730)   </features>
	I0630 14:06:07.031515 1460091 main.go:141] libmachine: (addons-412730)   <cpu mode='host-passthrough'>
	I0630 14:06:07.031524 1460091 main.go:141] libmachine: (addons-412730)   
	I0630 14:06:07.031534 1460091 main.go:141] libmachine: (addons-412730)   </cpu>
	I0630 14:06:07.031544 1460091 main.go:141] libmachine: (addons-412730)   <os>
	I0630 14:06:07.031554 1460091 main.go:141] libmachine: (addons-412730)     <type>hvm</type>
	I0630 14:06:07.031563 1460091 main.go:141] libmachine: (addons-412730)     <boot dev='cdrom'/>
	I0630 14:06:07.031572 1460091 main.go:141] libmachine: (addons-412730)     <boot dev='hd'/>
	I0630 14:06:07.031581 1460091 main.go:141] libmachine: (addons-412730)     <bootmenu enable='no'/>
	I0630 14:06:07.031597 1460091 main.go:141] libmachine: (addons-412730)   </os>
	I0630 14:06:07.031609 1460091 main.go:141] libmachine: (addons-412730)   <devices>
	I0630 14:06:07.031619 1460091 main.go:141] libmachine: (addons-412730)     <disk type='file' device='cdrom'>
	I0630 14:06:07.031636 1460091 main.go:141] libmachine: (addons-412730)       <source file='/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/boot2docker.iso'/>
	I0630 14:06:07.031647 1460091 main.go:141] libmachine: (addons-412730)       <target dev='hdc' bus='scsi'/>
	I0630 14:06:07.031659 1460091 main.go:141] libmachine: (addons-412730)       <readonly/>
	I0630 14:06:07.031667 1460091 main.go:141] libmachine: (addons-412730)     </disk>
	I0630 14:06:07.031679 1460091 main.go:141] libmachine: (addons-412730)     <disk type='file' device='disk'>
	I0630 14:06:07.031689 1460091 main.go:141] libmachine: (addons-412730)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 14:06:07.031737 1460091 main.go:141] libmachine: (addons-412730)       <source file='/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/addons-412730.rawdisk'/>
	I0630 14:06:07.031764 1460091 main.go:141] libmachine: (addons-412730)       <target dev='hda' bus='virtio'/>
	I0630 14:06:07.031774 1460091 main.go:141] libmachine: (addons-412730)     </disk>
	I0630 14:06:07.031792 1460091 main.go:141] libmachine: (addons-412730)     <interface type='network'>
	I0630 14:06:07.031805 1460091 main.go:141] libmachine: (addons-412730)       <source network='mk-addons-412730'/>
	I0630 14:06:07.031820 1460091 main.go:141] libmachine: (addons-412730)       <model type='virtio'/>
	I0630 14:06:07.031854 1460091 main.go:141] libmachine: (addons-412730)     </interface>
	I0630 14:06:07.031878 1460091 main.go:141] libmachine: (addons-412730)     <interface type='network'>
	I0630 14:06:07.031890 1460091 main.go:141] libmachine: (addons-412730)       <source network='default'/>
	I0630 14:06:07.031901 1460091 main.go:141] libmachine: (addons-412730)       <model type='virtio'/>
	I0630 14:06:07.031909 1460091 main.go:141] libmachine: (addons-412730)     </interface>
	I0630 14:06:07.031919 1460091 main.go:141] libmachine: (addons-412730)     <serial type='pty'>
	I0630 14:06:07.031927 1460091 main.go:141] libmachine: (addons-412730)       <target port='0'/>
	I0630 14:06:07.031942 1460091 main.go:141] libmachine: (addons-412730)     </serial>
	I0630 14:06:07.031951 1460091 main.go:141] libmachine: (addons-412730)     <console type='pty'>
	I0630 14:06:07.031964 1460091 main.go:141] libmachine: (addons-412730)       <target type='serial' port='0'/>
	I0630 14:06:07.031975 1460091 main.go:141] libmachine: (addons-412730)     </console>
	I0630 14:06:07.031982 1460091 main.go:141] libmachine: (addons-412730)     <rng model='virtio'>
	I0630 14:06:07.031995 1460091 main.go:141] libmachine: (addons-412730)       <backend model='random'>/dev/random</backend>
	I0630 14:06:07.032001 1460091 main.go:141] libmachine: (addons-412730)     </rng>
	I0630 14:06:07.032011 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.032016 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.032026 1460091 main.go:141] libmachine: (addons-412730)   </devices>
	I0630 14:06:07.032034 1460091 main.go:141] libmachine: (addons-412730) </domain>
	I0630 14:06:07.032066 1460091 main.go:141] libmachine: (addons-412730) 
	I0630 14:06:07.037044 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:0d:7b:07 in network default
	I0630 14:06:07.037851 1460091 main.go:141] libmachine: (addons-412730) starting domain...
	I0630 14:06:07.037899 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:07.037908 1460091 main.go:141] libmachine: (addons-412730) ensuring networks are active...
	I0630 14:06:07.038725 1460091 main.go:141] libmachine: (addons-412730) Ensuring network default is active
	I0630 14:06:07.039106 1460091 main.go:141] libmachine: (addons-412730) Ensuring network mk-addons-412730 is active
	I0630 14:06:07.039715 1460091 main.go:141] libmachine: (addons-412730) getting domain XML...
	I0630 14:06:07.040672 1460091 main.go:141] libmachine: (addons-412730) creating domain...
	I0630 14:06:08.319736 1460091 main.go:141] libmachine: (addons-412730) waiting for IP...
	I0630 14:06:08.320757 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.321298 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.321358 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.321305 1460113 retry.go:31] will retry after 217.608702ms: waiting for domain to come up
	I0630 14:06:08.541088 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.541707 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.541732 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.541668 1460113 retry.go:31] will retry after 322.22603ms: waiting for domain to come up
	I0630 14:06:08.865505 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.865965 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.865994 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.865925 1460113 retry.go:31] will retry after 339.049792ms: waiting for domain to come up
	I0630 14:06:09.206655 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:09.207155 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:09.207213 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:09.207148 1460113 retry.go:31] will retry after 478.054487ms: waiting for domain to come up
	I0630 14:06:09.686885 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:09.687397 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:09.687426 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:09.687347 1460113 retry.go:31] will retry after 663.338232ms: waiting for domain to come up
	I0630 14:06:10.352433 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:10.352917 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:10.352942 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:10.352876 1460113 retry.go:31] will retry after 824.880201ms: waiting for domain to come up
	I0630 14:06:11.179557 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:11.180050 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:11.180081 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:11.180000 1460113 retry.go:31] will retry after 1.072535099s: waiting for domain to come up
	I0630 14:06:12.253993 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:12.254526 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:12.254560 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:12.254433 1460113 retry.go:31] will retry after 1.120902402s: waiting for domain to come up
	I0630 14:06:13.376695 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:13.377283 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:13.377315 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:13.377244 1460113 retry.go:31] will retry after 1.419759095s: waiting for domain to come up
	I0630 14:06:14.799069 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:14.799546 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:14.799574 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:14.799514 1460113 retry.go:31] will retry after 1.843918596s: waiting for domain to come up
	I0630 14:06:16.645512 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:16.646025 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:16.646082 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:16.646003 1460113 retry.go:31] will retry after 2.785739179s: waiting for domain to come up
	I0630 14:06:19.434426 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:19.435055 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:19.435086 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:19.434987 1460113 retry.go:31] will retry after 2.736128675s: waiting for domain to come up
	I0630 14:06:22.172470 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:22.173071 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:22.173092 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:22.173042 1460113 retry.go:31] will retry after 3.042875133s: waiting for domain to come up
	I0630 14:06:25.219310 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:25.219910 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:25.219934 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:25.219869 1460113 retry.go:31] will retry after 4.255226103s: waiting for domain to come up
	I0630 14:06:29.478898 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.479625 1460091 main.go:141] libmachine: (addons-412730) found domain IP: 192.168.39.114
	I0630 14:06:29.479653 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has current primary IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.479661 1460091 main.go:141] libmachine: (addons-412730) reserving static IP address...
	I0630 14:06:29.480160 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find host DHCP lease matching {name: "addons-412730", mac: "52:54:00:ac:59:ff", ip: "192.168.39.114"} in network mk-addons-412730
	I0630 14:06:29.563376 1460091 main.go:141] libmachine: (addons-412730) reserved static IP address 192.168.39.114 for domain addons-412730
	I0630 14:06:29.563409 1460091 main.go:141] libmachine: (addons-412730) waiting for SSH...
	I0630 14:06:29.563418 1460091 main.go:141] libmachine: (addons-412730) DBG | Getting to WaitForSSH function...
	I0630 14:06:29.566605 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.567079 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.567114 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.567268 1460091 main.go:141] libmachine: (addons-412730) DBG | Using SSH client type: external
	I0630 14:06:29.567309 1460091 main.go:141] libmachine: (addons-412730) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa (-rw-------)
	I0630 14:06:29.567351 1460091 main.go:141] libmachine: (addons-412730) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 14:06:29.567371 1460091 main.go:141] libmachine: (addons-412730) DBG | About to run SSH command:
	I0630 14:06:29.567386 1460091 main.go:141] libmachine: (addons-412730) DBG | exit 0
	I0630 14:06:29.697378 1460091 main.go:141] libmachine: (addons-412730) DBG | SSH cmd err, output: <nil>: 
	I0630 14:06:29.697644 1460091 main.go:141] libmachine: (addons-412730) KVM machine creation complete
	I0630 14:06:29.698028 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:29.698656 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:29.698905 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:29.699080 1460091 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 14:06:29.699098 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:29.700512 1460091 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 14:06:29.700530 1460091 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 14:06:29.700538 1460091 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 14:06:29.700545 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.702878 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.703363 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.703393 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.703678 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.703917 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.704093 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.704253 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.704472 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.704757 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.704772 1460091 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 14:06:29.825352 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:06:29.825394 1460091 main.go:141] libmachine: Detecting the provisioner...
	I0630 14:06:29.825405 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.828698 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.829249 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.829291 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.829467 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.829702 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.829910 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.830086 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.830284 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.830503 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.830515 1460091 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 14:06:29.950727 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 14:06:29.950815 1460091 main.go:141] libmachine: found compatible host: buildroot
	I0630 14:06:29.950829 1460091 main.go:141] libmachine: Provisioning with buildroot...
	I0630 14:06:29.950838 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:29.951114 1460091 buildroot.go:166] provisioning hostname "addons-412730"
	I0630 14:06:29.951153 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:29.951406 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.954775 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.955251 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.955283 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.955448 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.955676 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.955864 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.956131 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.956359 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.956598 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.956616 1460091 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-412730 && echo "addons-412730" | sudo tee /etc/hostname
	I0630 14:06:30.091933 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-412730
	
	I0630 14:06:30.091974 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.095576 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.095967 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.095993 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.096193 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.096420 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.096640 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.096775 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.096955 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:30.097249 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:30.097278 1460091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-412730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-412730/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-412730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 14:06:30.228409 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:06:30.228455 1460091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1452140/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1452140/.minikube}
	I0630 14:06:30.228507 1460091 buildroot.go:174] setting up certificates
	I0630 14:06:30.228539 1460091 provision.go:84] configureAuth start
	I0630 14:06:30.228557 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:30.228999 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:30.232598 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.233018 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.233052 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.233306 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.235934 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.236310 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.236353 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.236511 1460091 provision.go:143] copyHostCerts
	I0630 14:06:30.236588 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.pem (1078 bytes)
	I0630 14:06:30.236717 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/cert.pem (1123 bytes)
	I0630 14:06:30.236771 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/key.pem (1675 bytes)
	I0630 14:06:30.236826 1460091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem org=jenkins.addons-412730 san=[127.0.0.1 192.168.39.114 addons-412730 localhost minikube]
	I0630 14:06:30.629859 1460091 provision.go:177] copyRemoteCerts
	I0630 14:06:30.629936 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 14:06:30.629965 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.633589 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.634037 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.634067 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.634292 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.634709 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.634951 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.635149 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:30.732351 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 14:06:30.765263 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 14:06:30.797980 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 14:06:30.829589 1460091 provision.go:87] duration metric: took 601.031936ms to configureAuth
	I0630 14:06:30.829626 1460091 buildroot.go:189] setting minikube options for container-runtime
	I0630 14:06:30.829835 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:30.829875 1460091 main.go:141] libmachine: Checking connection to Docker...
	I0630 14:06:30.829891 1460091 main.go:141] libmachine: (addons-412730) Calling .GetURL
	I0630 14:06:30.831493 1460091 main.go:141] libmachine: (addons-412730) DBG | using libvirt version 6000000
	I0630 14:06:30.834168 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.834575 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.834608 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.834836 1460091 main.go:141] libmachine: Docker is up and running!
	I0630 14:06:30.834858 1460091 main.go:141] libmachine: Reticulating splines...
	I0630 14:06:30.834867 1460091 client.go:171] duration metric: took 24.499610068s to LocalClient.Create
	I0630 14:06:30.834910 1460091 start.go:167] duration metric: took 24.499694666s to libmachine.API.Create "addons-412730"
	I0630 14:06:30.834925 1460091 start.go:293] postStartSetup for "addons-412730" (driver="kvm2")
	I0630 14:06:30.834938 1460091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 14:06:30.834971 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:30.835263 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 14:06:30.835291 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.837701 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.838027 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.838070 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.838230 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.838425 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.838615 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.838765 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:30.930536 1460091 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 14:06:30.935492 1460091 info.go:137] Remote host: Buildroot 2025.02
	I0630 14:06:30.935534 1460091 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1452140/.minikube/addons for local assets ...
	I0630 14:06:30.935631 1460091 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1452140/.minikube/files for local assets ...
	I0630 14:06:30.935674 1460091 start.go:296] duration metric: took 100.742963ms for postStartSetup
	I0630 14:06:30.935713 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:30.936417 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:30.939655 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.940194 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.940223 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.940486 1460091 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json ...
	I0630 14:06:30.940676 1460091 start.go:128] duration metric: took 24.626491157s to createHost
	I0630 14:06:30.940701 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.943451 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.943947 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.943979 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.944167 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.944383 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.944557 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.944780 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.944979 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:30.945339 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:30.945363 1460091 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 14:06:31.062586 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751292391.035640439
	
	I0630 14:06:31.062617 1460091 fix.go:216] guest clock: 1751292391.035640439
	I0630 14:06:31.062625 1460091 fix.go:229] Guest: 2025-06-30 14:06:31.035640439 +0000 UTC Remote: 2025-06-30 14:06:30.940689328 +0000 UTC m=+24.741258527 (delta=94.951111ms)
	I0630 14:06:31.062664 1460091 fix.go:200] guest clock delta is within tolerance: 94.951111ms
	I0630 14:06:31.062669 1460091 start.go:83] releasing machines lock for "addons-412730", held for 24.748599614s
	I0630 14:06:31.062697 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.063068 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:31.066256 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.066740 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.066774 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.067022 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.067620 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.067907 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.068104 1460091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 14:06:31.068165 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:31.068221 1460091 ssh_runner.go:195] Run: cat /version.json
	I0630 14:06:31.068250 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:31.071486 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.071690 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072008 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.072043 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072103 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.072130 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072204 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:31.072375 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:31.072476 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:31.072559 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:31.072632 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:31.072686 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:31.072859 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:31.072867 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:31.159582 1460091 ssh_runner.go:195] Run: systemctl --version
	I0630 14:06:31.186817 1460091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 14:06:31.193553 1460091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 14:06:31.193649 1460091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 14:06:31.215105 1460091 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 14:06:31.215137 1460091 start.go:495] detecting cgroup driver to use...
	I0630 14:06:31.215213 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0630 14:06:31.257543 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0630 14:06:31.273400 1460091 docker.go:230] disabling cri-docker service (if available) ...
	I0630 14:06:31.273466 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 14:06:31.289789 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 14:06:31.306138 1460091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 14:06:31.453571 1460091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 14:06:31.593173 1460091 docker.go:246] disabling docker service ...
	I0630 14:06:31.593260 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 14:06:31.610223 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 14:06:31.625803 1460091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 14:06:31.823510 1460091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 14:06:31.974811 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 14:06:31.996098 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 14:06:32.020154 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0630 14:06:32.033292 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0630 14:06:32.046251 1460091 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0630 14:06:32.046373 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0630 14:06:32.059569 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0630 14:06:32.072460 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0630 14:06:32.085242 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0630 14:06:32.098259 1460091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 14:06:32.111503 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0630 14:06:32.124063 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0630 14:06:32.136348 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
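[Editor's note] The run of sed commands above rewrites /etc/containerd/config.toml in place; the key edit is forcing `SystemdCgroup = false` so containerd uses the "cgroupfs" driver. A hedged sketch of that one substitution, exercised on a throwaway copy of the relevant config.toml stanza (the file content here is a simplified fragment, not the full config minikube ships):

```shell
# Apply the same SystemdCgroup rewrite as the log, on a temp copy.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Identical sed expression to the log line: preserve indentation, flip the value.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cgroup_line=$(grep -o 'SystemdCgroup = false' "$cfg")
rm -f "$cfg"
```

Running in-place sed against the real file is why the log restarts containerd afterwards: the daemon only picks up config.toml changes on restart.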
	I0630 14:06:32.148960 1460091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 14:06:32.159881 1460091 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 14:06:32.159967 1460091 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 14:06:32.176065 1460091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
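[Editor's note] The "couldn't verify netfilter" warning above is expected on a fresh guest: the sysctl key only exists once the br_netfilter module is loaded, which is exactly what the follow-up `modprobe` does. A sketch of that tolerant probe (the `nf_status` variable is illustrative, not part of minikube):

```shell
# Probe the bridge-netfilter sysctl; absence is not fatal, it just means
# the br_netfilter kernel module has not been loaded yet.
if sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
  nf_status=present
else
  nf_status=absent   # minikube's next step, `sudo modprobe br_netfilter`, creates the key
fi
```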
	I0630 14:06:32.188348 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:32.325076 1460091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0630 14:06:32.359838 1460091 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0630 14:06:32.359979 1460091 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0630 14:06:32.366616 1460091 retry.go:31] will retry after 624.469247ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0630 14:06:32.991518 1460091 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0630 14:06:32.997598 1460091 start.go:563] Will wait 60s for crictl version
	I0630 14:06:32.997677 1460091 ssh_runner.go:195] Run: which crictl
	I0630 14:06:33.002325 1460091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 14:06:33.045054 1460091 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0630 14:06:33.045186 1460091 ssh_runner.go:195] Run: containerd --version
	I0630 14:06:33.074290 1460091 ssh_runner.go:195] Run: containerd --version
	I0630 14:06:33.134404 1460091 out.go:177] * Preparing Kubernetes v1.33.2 on containerd 1.7.23 ...
	I0630 14:06:33.198052 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:33.201668 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:33.202138 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:33.202162 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:33.202486 1460091 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 14:06:33.207929 1460091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
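[Editor's note] The /etc/hosts edit above uses a grep-then-append pattern that is idempotent: any stale entry for the name is dropped before the fresh mapping is written. A sketch of the same pattern against a temp copy of the hosts file (temp files and the sample entries are assumptions; the real command targets /etc/hosts via sudo):

```shell
# Idempotent hosts-entry update: remove any old mapping, append the new one,
# then copy the result back over the original.
hosts=$(mktemp); tmp=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
{ grep -v "${tab}host\.minikube\.internal\$" "$hosts"; printf '192.168.39.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
# Exactly one entry should remain no matter how often this runs.
entries=$(grep -c "host\.minikube\.internal\$" "$hosts")
rm -f "$hosts" "$tmp"
```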
	I0630 14:06:33.224479 1460091 kubeadm.go:875] updating cluster {Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 14:06:33.224651 1460091 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime containerd
	I0630 14:06:33.224723 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:33.262407 1460091 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 14:06:33.262480 1460091 ssh_runner.go:195] Run: which lz4
	I0630 14:06:33.267241 1460091 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 14:06:33.272514 1460091 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 14:06:33.272561 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (420558900 bytes)
	I0630 14:06:34.883083 1460091 containerd.go:563] duration metric: took 1.615882395s to copy over tarball
	I0630 14:06:34.883194 1460091 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 14:06:36.966670 1460091 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08344467s)
	I0630 14:06:36.966710 1460091 containerd.go:570] duration metric: took 2.083586834s to extract the tarball
	I0630 14:06:36.966722 1460091 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 14:06:37.007649 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:37.150742 1460091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0630 14:06:37.193070 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:37.245622 1460091 retry.go:31] will retry after 173.895536ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-06-30T14:06:37Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0630 14:06:37.420139 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:37.464724 1460091 containerd.go:627] all images are preloaded for containerd runtime.
	I0630 14:06:37.464758 1460091 cache_images.go:84] Images are preloaded, skipping loading
	I0630 14:06:37.464771 1460091 kubeadm.go:926] updating node { 192.168.39.114 8443 v1.33.2 containerd true true} ...
	I0630 14:06:37.464919 1460091 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-412730 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 14:06:37.465002 1460091 ssh_runner.go:195] Run: sudo crictl info
	I0630 14:06:37.511001 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:37.511034 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:37.511049 1460091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 14:06:37.511083 1460091 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-412730 NodeName:addons-412730 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 14:06:37.511271 1460091 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-412730"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
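[Editor's note] The KubeletConfiguration stanza in the generated kubeadm config above pins `cgroupDriver: cgroupfs`, matching the `SystemdCgroup = false` edit made to containerd earlier in the log; the two must agree or kubelet fails to start pods. A quick sanity check of that agreement on a copy of the fragment (temp file and field extraction are illustrative only):

```shell
# Extract the cgroup driver from a copy of the KubeletConfiguration fragment.
kubelet_cfg=$(mktemp)
cat > "$kubelet_cfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF
driver=$(sed -n 's/^cgroupDriver: //p' "$kubelet_cfg")
rm -f "$kubelet_cfg"
```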
	I0630 14:06:37.511357 1460091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 14:06:37.525652 1460091 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 14:06:37.525746 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 14:06:37.538805 1460091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0630 14:06:37.562031 1460091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 14:06:37.587566 1460091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2309 bytes)
	I0630 14:06:37.610218 1460091 ssh_runner.go:195] Run: grep 192.168.39.114	control-plane.minikube.internal$ /etc/hosts
	I0630 14:06:37.615571 1460091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:06:37.632131 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:37.779642 1460091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:06:37.816746 1460091 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730 for IP: 192.168.39.114
	I0630 14:06:37.816781 1460091 certs.go:194] generating shared ca certs ...
	I0630 14:06:37.816801 1460091 certs.go:226] acquiring lock for ca certs: {Name:mk0651a034eff71720267efe75974a64ed116095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:37.816978 1460091 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key
	I0630 14:06:38.156994 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt ...
	I0630 14:06:38.157034 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt: {Name:mkd96adf4b8dd000ef155465cd7541cb4dbc54f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.157267 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key ...
	I0630 14:06:38.157285 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key: {Name:mk6da24087206aaf4a1c31ab7ae44030109e489f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.157410 1460091 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key
	I0630 14:06:38.393807 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt ...
	I0630 14:06:38.393842 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt: {Name:mk321b6cabce084092be365d32608954916437e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.394011 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key ...
	I0630 14:06:38.394022 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key: {Name:mk82210dbfc17828b961241482db840048e12b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.394107 1460091 certs.go:256] generating profile certs ...
	I0630 14:06:38.394167 1460091 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key
	I0630 14:06:38.394181 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt with IP's: []
	I0630 14:06:39.030200 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt ...
	I0630 14:06:39.030240 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: {Name:mkc9df953aca8566f0870f2298300ff89b509f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.030418 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key ...
	I0630 14:06:39.030431 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key: {Name:mka533b0514825fa7b24c00fc43d73342f608e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.030498 1460091 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367
	I0630 14:06:39.030521 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114]
	I0630 14:06:39.110277 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 ...
	I0630 14:06:39.110319 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367: {Name:mk48ce6fc18dec0b61c5b66960071aff2a24b262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.110478 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367 ...
	I0630 14:06:39.110491 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367: {Name:mk75d3bfb9efccf05811ea90591687efdb3f8988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.110564 1460091 certs.go:381] copying /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt
	I0630 14:06:39.110641 1460091 certs.go:385] copying /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key
	I0630 14:06:39.110691 1460091 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key
	I0630 14:06:39.110708 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt with IP's: []
	I0630 14:06:39.311094 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt ...
	I0630 14:06:39.311131 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt: {Name:mkc683f67a11502b5bdeac9ab79459fda8dea4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.311302 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key ...
	I0630 14:06:39.311315 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key: {Name:mk896db09a1f34404a9d7ba2ae83a6472f785239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.311491 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 14:06:39.311529 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem (1078 bytes)
	I0630 14:06:39.311552 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem (1123 bytes)
	I0630 14:06:39.311574 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem (1675 bytes)
	I0630 14:06:39.312289 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 14:06:39.348883 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 14:06:39.387215 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 14:06:39.418089 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0630 14:06:39.456310 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 14:06:39.485942 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 14:06:39.518368 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 14:06:39.550454 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 14:06:39.582512 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 14:06:39.617828 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 14:06:39.640030 1460091 ssh_runner.go:195] Run: openssl version
	I0630 14:06:39.647364 1460091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 14:06:39.660898 1460091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.666460 1460091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.666541 1460091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.674132 1460091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 14:06:39.687542 1460091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 14:06:39.692849 1460091 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 14:06:39.692930 1460091 kubeadm.go:392] StartCluster: {Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:06:39.693042 1460091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0630 14:06:39.693124 1460091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 14:06:39.733818 1460091 cri.go:89] found id: ""
	I0630 14:06:39.733920 1460091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 14:06:39.748350 1460091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 14:06:39.762340 1460091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 14:06:39.774501 1460091 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 14:06:39.774532 1460091 kubeadm.go:157] found existing configuration files:
	
	I0630 14:06:39.774596 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 14:06:39.786405 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 14:06:39.786474 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 14:06:39.798586 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 14:06:39.809858 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 14:06:39.809932 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 14:06:39.822150 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 14:06:39.833619 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 14:06:39.833683 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 14:06:39.845682 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 14:06:39.856947 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 14:06:39.857015 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 14:06:39.870036 1460091 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 14:06:39.922555 1460091 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 14:06:39.922624 1460091 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 14:06:40.045815 1460091 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 14:06:40.045999 1460091 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 14:06:40.046138 1460091 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 14:06:40.052549 1460091 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 14:06:40.071818 1460091 out.go:235]   - Generating certificates and keys ...
	I0630 14:06:40.071955 1460091 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 14:06:40.072042 1460091 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 14:06:40.453325 1460091 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 14:06:40.505817 1460091 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 14:06:41.044548 1460091 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 14:06:41.417521 1460091 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 14:06:41.739226 1460091 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 14:06:41.739421 1460091 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-412730 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0630 14:06:41.843371 1460091 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 14:06:41.843539 1460091 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-412730 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0630 14:06:42.399109 1460091 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 14:06:42.840033 1460091 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 14:06:43.009726 1460091 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 14:06:43.009824 1460091 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 14:06:43.506160 1460091 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 14:06:43.698222 1460091 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 14:06:43.840816 1460091 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 14:06:44.231431 1460091 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 14:06:44.461049 1460091 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 14:06:44.461356 1460091 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 14:06:44.463997 1460091 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 14:06:44.465945 1460091 out.go:235]   - Booting up control plane ...
	I0630 14:06:44.466073 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 14:06:44.466167 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 14:06:44.466289 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 14:06:44.484244 1460091 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 14:06:44.494126 1460091 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 14:06:44.494220 1460091 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 14:06:44.678804 1460091 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 14:06:44.678979 1460091 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 14:06:45.689158 1460091 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.011115741s
	I0630 14:06:45.693304 1460091 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 14:06:45.693435 1460091 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.114:8443/livez
	I0630 14:06:45.694157 1460091 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 14:06:45.694324 1460091 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 14:06:48.529853 1460091 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.836599214s
	I0630 14:06:49.645556 1460091 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.952842655s
	I0630 14:06:51.692654 1460091 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.00153129s
	I0630 14:06:51.707013 1460091 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 14:06:51.730537 1460091 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 14:06:51.769844 1460091 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 14:06:51.770065 1460091 kubeadm.go:310] [mark-control-plane] Marking the node addons-412730 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 14:06:51.785586 1460091 kubeadm.go:310] [bootstrap-token] Using token: ggslqu.tjlqizciadnjmkc4
	I0630 14:06:51.787072 1460091 out.go:235]   - Configuring RBAC rules ...
	I0630 14:06:51.787249 1460091 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 14:06:51.798527 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 14:06:51.808767 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 14:06:51.813113 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 14:06:51.818246 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 14:06:51.822008 1460091 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 14:06:52.099709 1460091 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 14:06:52.594117 1460091 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 14:06:53.099418 1460091 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 14:06:53.100502 1460091 kubeadm.go:310] 
	I0630 14:06:53.100601 1460091 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 14:06:53.100613 1460091 kubeadm.go:310] 
	I0630 14:06:53.100755 1460091 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 14:06:53.100795 1460091 kubeadm.go:310] 
	I0630 14:06:53.100858 1460091 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 14:06:53.100965 1460091 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 14:06:53.101053 1460091 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 14:06:53.101065 1460091 kubeadm.go:310] 
	I0630 14:06:53.101171 1460091 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 14:06:53.101191 1460091 kubeadm.go:310] 
	I0630 14:06:53.101279 1460091 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 14:06:53.101291 1460091 kubeadm.go:310] 
	I0630 14:06:53.101389 1460091 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 14:06:53.101534 1460091 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 14:06:53.101651 1460091 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 14:06:53.101664 1460091 kubeadm.go:310] 
	I0630 14:06:53.101782 1460091 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 14:06:53.101913 1460091 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 14:06:53.101931 1460091 kubeadm.go:310] 
	I0630 14:06:53.102062 1460091 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ggslqu.tjlqizciadnjmkc4 \
	I0630 14:06:53.102204 1460091 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:617c09b4db1bc5793f47445d1f5bc6fe956626f21f2861489a8e746dc9df0278 \
	I0630 14:06:53.102237 1460091 kubeadm.go:310] 	--control-plane 
	I0630 14:06:53.102246 1460091 kubeadm.go:310] 
	I0630 14:06:53.102351 1460091 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 14:06:53.102362 1460091 kubeadm.go:310] 
	I0630 14:06:53.102448 1460091 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ggslqu.tjlqizciadnjmkc4 \
	I0630 14:06:53.102611 1460091 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:617c09b4db1bc5793f47445d1f5bc6fe956626f21f2861489a8e746dc9df0278 
	I0630 14:06:53.104820 1460091 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 14:06:53.104859 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:53.104869 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:53.106742 1460091 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 14:06:53.108147 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 14:06:53.121105 1460091 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0630 14:06:53.146410 1460091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 14:06:53.146477 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:53.146567 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-412730 minikube.k8s.io/updated_at=2025_06_30T14_06_53_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=addons-412730 minikube.k8s.io/primary=true
	I0630 14:06:53.306096 1460091 ops.go:34] apiserver oom_adj: -16
	I0630 14:06:53.306244 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:53.806580 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:54.306722 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:54.807256 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:55.306344 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:55.807179 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.306640 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.807184 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.895027 1460091 kubeadm.go:1105] duration metric: took 3.748614141s to wait for elevateKubeSystemPrivileges
	I0630 14:06:56.895079 1460091 kubeadm.go:394] duration metric: took 17.202154504s to StartCluster
	I0630 14:06:56.895108 1460091 settings.go:142] acquiring lock: {Name:mk841f56cd7a9b39ff7ba20d8e74be5d85ec1f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:56.895268 1460091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:06:56.895670 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/kubeconfig: {Name:mkaf116de3c28eb3dfd9964f3211c065b2db02a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:56.895901 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 14:06:56.895932 1460091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0630 14:06:56.895997 1460091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0630 14:06:56.896117 1460091 addons.go:69] Setting yakd=true in profile "addons-412730"
	I0630 14:06:56.896139 1460091 addons.go:238] Setting addon yakd=true in "addons-412730"
	I0630 14:06:56.896139 1460091 addons.go:69] Setting ingress=true in profile "addons-412730"
	I0630 14:06:56.896159 1460091 addons.go:238] Setting addon ingress=true in "addons-412730"
	I0630 14:06:56.896176 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896165 1460091 addons.go:69] Setting registry=true in profile "addons-412730"
	I0630 14:06:56.896200 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896203 1460091 addons.go:238] Setting addon registry=true in "addons-412730"
	I0630 14:06:56.896203 1460091 addons.go:69] Setting inspektor-gadget=true in profile "addons-412730"
	I0630 14:06:56.896223 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:56.896233 1460091 addons.go:238] Setting addon inspektor-gadget=true in "addons-412730"
	I0630 14:06:56.896223 1460091 addons.go:69] Setting metrics-server=true in profile "addons-412730"
	I0630 14:06:56.896245 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896253 1460091 addons.go:238] Setting addon metrics-server=true in "addons-412730"
	I0630 14:06:56.896265 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896276 1460091 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-412730"
	I0630 14:06:56.896285 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896287 1460091 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-412730"
	I0630 14:06:56.896305 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896570 1460091 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-412730"
	I0630 14:06:56.896661 1460091 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-412730"
	I0630 14:06:56.896723 1460091 addons.go:69] Setting volcano=true in profile "addons-412730"
	I0630 14:06:56.896778 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896785 1460091 addons.go:69] Setting registry-creds=true in profile "addons-412730"
	I0630 14:06:56.896751 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896799 1460091 addons.go:69] Setting volumesnapshots=true in profile "addons-412730"
	I0630 14:06:56.896804 1460091 addons.go:238] Setting addon registry-creds=true in "addons-412730"
	I0630 14:06:56.896811 1460091 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-412730"
	I0630 14:06:56.896816 1460091 addons.go:238] Setting addon volumesnapshots=true in "addons-412730"
	I0630 14:06:56.896825 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896830 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896835 1460091 addons.go:69] Setting cloud-spanner=true in profile "addons-412730"
	I0630 14:06:56.896838 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896836 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896852 1460091 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-412730"
	I0630 14:06:56.896876 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896897 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896918 1460091 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-412730"
	I0630 14:06:56.896941 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897097 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897165 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897187 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897280 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897295 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896826 1460091 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-412730"
	I0630 14:06:56.897181 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897361 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896845 1460091 addons.go:238] Setting addon cloud-spanner=true in "addons-412730"
	I0630 14:06:56.897199 1460091 addons.go:69] Setting storage-provisioner=true in profile "addons-412730"
	I0630 14:06:56.897456 1460091 addons.go:238] Setting addon storage-provisioner=true in "addons-412730"
	I0630 14:06:56.897488 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897499 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897606 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897861 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897876 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897886 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897898 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897978 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898012 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896791 1460091 addons.go:238] Setting addon volcano=true in "addons-412730"
	I0630 14:06:56.898062 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896771 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898162 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896767 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898520 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897212 1460091 addons.go:69] Setting default-storageclass=true in profile "addons-412730"
	I0630 14:06:56.898795 1460091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-412730"
	I0630 14:06:56.899315 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.899389 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897224 1460091 addons.go:69] Setting gcp-auth=true in profile "addons-412730"
	I0630 14:06:56.899644 1460091 mustload.go:65] Loading cluster: addons-412730
	I0630 14:06:56.897241 1460091 addons.go:69] Setting ingress-dns=true in profile "addons-412730"
	I0630 14:06:56.899700 1460091 addons.go:238] Setting addon ingress-dns=true in "addons-412730"
	I0630 14:06:56.899796 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896785 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.899911 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897328 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.899604 1460091 out.go:177] * Verifying Kubernetes components...
	I0630 14:06:56.915173 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:56.925317 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0630 14:06:56.933471 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0630 14:06:56.933567 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:56.933596 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0630 14:06:56.934049 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934108 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.934159 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934204 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.934401 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934443 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.938799 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0630 14:06:56.939041 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0630 14:06:56.939193 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0630 14:06:56.939457 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0630 14:06:56.939729 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940028 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940309 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.940326 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.940413 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940931 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941099 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.941112 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.941179 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.941232 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941301 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941738 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.941788 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.942491 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942515 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.942624 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.942661 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942683 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.942765 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.942792 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942805 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943018 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.943038 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943153 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.943163 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943215 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.943262 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.944142 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.944175 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.944193 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.944211 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.944294 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.944358 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.945770 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.945856 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.946237 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.946282 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.947082 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.947128 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.948967 1460091 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-412730"
	I0630 14:06:56.949015 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.949453 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.949501 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.962217 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.962296 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.973604 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I0630 14:06:56.974149 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.974664 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.974695 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.975099 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.975299 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.975756 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0630 14:06:56.977204 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.977635 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.977698 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.977979 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.978793 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.978814 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.979233 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.979861 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.979908 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.983635 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0630 14:06:56.984067 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0630 14:06:56.984613 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.985289 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.985309 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.985797 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.986422 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.986466 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.987326 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0630 14:06:56.987554 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I0630 14:06:56.988111 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.988781 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.988800 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.988868 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
	I0630 14:06:56.989272 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.989514 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.989982 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.990005 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.990076 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.990136 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.990167 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.990395 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.990688 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.990745 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.991420 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.992366 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.992419 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.992669 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I0630 14:06:56.993907 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.995228 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.995248 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.995880 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.997265 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.999293 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0630 14:06:56.999370 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.001508 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0630 14:06:57.002883 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0630 14:06:57.002916 1460091 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0630 14:06:57.002942 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.003610 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0630 14:06:57.005195 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0630 14:06:57.005935 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.005991 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.006255 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I0630 14:06:57.006289 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.006456 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.006802 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.007205 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0630 14:06:57.007321 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I0630 14:06:57.007438 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007452 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.007601 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007616 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.007742 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007767 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.008050 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008112 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.008285 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008301 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008675 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.008703 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.008723 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.008787 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.008808 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.009263 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.009378 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.009421 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.009781 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.010031 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.010108 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.010355 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.010373 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.010513 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.010533 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.010629 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.010969 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.010977 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.011283 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.011304 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.011392 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.011650 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.011783 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.011867 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.012379 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.012423 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.012599 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.012859 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.012877 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.013047 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.013778 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.014215 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.014495 1460091 addons.go:238] Setting addon default-storageclass=true in "addons-412730"
	I0630 14:06:57.014541 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:57.014778 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.014972 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.015012 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.015647 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.017091 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.017305 1460091 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0630 14:06:57.017315 1460091 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0630 14:06:57.019235 1460091 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0630 14:06:57.019245 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0630 14:06:57.019258 1460091 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0630 14:06:57.019263 1460091 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0630 14:06:57.019284 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.019284 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.019356 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 14:06:57.020515 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45803
	I0630 14:06:57.020579 1460091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:06:57.020596 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 14:06:57.020635 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.021372 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.021977 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.022038 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.022485 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.023104 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.023180 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.023405 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.023860 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.023897 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.025612 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.025864 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.025948 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43573
	I0630 14:06:57.026240 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.026420 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.026868 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.028570 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029396 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.029420 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029587 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.029699 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.029761 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.029777 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029959 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.030089 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.030322 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.030383 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.030669 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.031123 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.031274 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.031289 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.031683 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.037907 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.038177 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.039744 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0630 14:06:57.039978 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42319
	I0630 14:06:57.040537 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.040729 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.041308 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.041328 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.041600 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.041615 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.041928 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.042164 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.042315 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.044033 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0630 14:06:57.044725 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.045331 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.045350 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.045878 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.045938 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.046425 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0630 14:06:57.047116 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.047396 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.047496 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.048257 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.048279 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.048498 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.049312 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.049440 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.049911 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.050622 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0630 14:06:57.050709 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:06:57.051429 1460091 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0630 14:06:57.051993 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.053508 1460091 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:06:57.053531 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0630 14:06:57.053554 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.054413 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42375
	I0630 14:06:57.054437 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:06:57.054478 1460091 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.35
	I0630 14:06:57.054413 1460091 out.go:177]   - Using image docker.io/registry:3.0.0
	I0630 14:06:57.054933 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.055768 1460091 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0630 14:06:57.055790 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0630 14:06:57.055812 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.055852 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.055876 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.056303 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.056581 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0630 14:06:57.056594 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.056599 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0630 14:06:57.056622 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.057388 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
	I0630 14:06:57.058752 1460091 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:06:57.058770 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0630 14:06:57.058788 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.059503 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.060288 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.060307 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.060551 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.060762 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.060918 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.060980 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.061036 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.061516 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0630 14:06:57.062190 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.062207 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.062733 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.062771 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.062855 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.062894 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.062999 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.063152 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.063283 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.063407 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.063631 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.1
	I0630 14:06:57.063848 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.063854 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0630 14:06:57.063891 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0630 14:06:57.064349 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.064387 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.064484 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.064596 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.064660 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.064704 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.064881 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.064942 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.065098 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.065315 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.065331 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.065402 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.065624 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.066156 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.066196 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.066203 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.1
	I0630 14:06:57.066852 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.066874 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.066915 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0630 14:06:57.067252 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.067449 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.067944 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.068048 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.068097 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.068228 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.068613 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.068623 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.068822 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.068891 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.1
	I0630 14:06:57.069115 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.069121 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.069356 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.069425 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0630 14:06:57.069576 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.070270 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.070286 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.070342 1460091 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0630 14:06:57.071005 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.071129 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.071152 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.071943 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.071951 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0630 14:06:57.071970 1460091 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0630 14:06:57.071992 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.072108 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.072154 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.072685 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0630 14:06:57.072774 1460091 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0630 14:06:57.072798 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498069 bytes)
	I0630 14:06:57.072818 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.073341 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.074059 1460091 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0630 14:06:57.074063 1460091 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:06:57.074155 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0630 14:06:57.074179 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.075067 1460091 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0630 14:06:57.075229 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:06:57.075246 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0630 14:06:57.075572 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.076243 1460091 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:06:57.076303 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0630 14:06:57.076329 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.078812 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43631
	I0630 14:06:57.079025 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.079130 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.079652 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.080327 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.080351 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.080481 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.080507 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.080634 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.080858 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.081036 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.081055 1460091 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0630 14:06:57.081228 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.081763 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.082138 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.082262 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.082706 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.082752 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.083020 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.083040 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083087 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.083100 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083265 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.083494 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.083497 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.083593 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083780 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.083786 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.083977 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.084112 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.084235 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.084469 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.084506 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.084520 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.084738 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.084918 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.085065 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.085095 1460091 out.go:177]   - Using image docker.io/busybox:stable
	I0630 14:06:57.085067 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.085223 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.085318 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.085373 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.085526 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.085673 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.085865 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.086430 1460091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:06:57.086442 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0630 14:06:57.086455 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.087486 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0630 14:06:57.087965 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.088516 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.088545 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.089121 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.089329 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.089866 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.090528 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.090554 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.090740 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.090964 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.091072 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.091131 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.091254 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.092992 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0630 14:06:57.094599 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0630 14:06:57.095998 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0630 14:06:57.097039 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0630 14:06:57.098265 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0630 14:06:57.099547 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0630 14:06:57.100645 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0630 14:06:57.101875 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0630 14:06:57.103299 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0630 14:06:57.103321 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0630 14:06:57.103347 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.107000 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0630 14:06:57.107083 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.107594 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.107627 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.107650 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.107840 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.108051 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.108244 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.108441 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.108455 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.108453 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.108913 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.109191 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.111002 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.111252 1460091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 14:06:57.111268 1460091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 14:06:57.111288 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.114635 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.115172 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.115248 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.115422 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.115624 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.115796 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.115964 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	W0630 14:06:57.363795 1460091 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36374->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.363842 1460091 retry.go:31] will retry after 315.136796ms: ssh: handshake failed: read tcp 192.168.39.1:36374->192.168.39.114:22: read: connection reset by peer
	W0630 14:06:57.364018 1460091 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36380->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.364049 1460091 retry.go:31] will retry after 155.525336ms: ssh: handshake failed: read tcp 192.168.39.1:36380->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.701875 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 14:06:57.701976 1460091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:06:57.837038 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0630 14:06:57.837063 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0630 14:06:57.838628 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:06:57.843008 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0630 14:06:57.843041 1460091 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0630 14:06:57.872159 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0630 14:06:57.909976 1460091 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:06:57.910010 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0630 14:06:57.932688 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0630 14:06:57.932733 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0630 14:06:57.995639 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:06:58.066461 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0630 14:06:58.080857 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0630 14:06:58.080899 1460091 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0630 14:06:58.095890 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:06:58.137462 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:06:58.206306 1460091 node_ready.go:35] waiting up to 6m0s for node "addons-412730" to be "Ready" ...
	I0630 14:06:58.209015 1460091 node_ready.go:49] node "addons-412730" is "Ready"
	I0630 14:06:58.209060 1460091 node_ready.go:38] duration metric: took 2.705097ms for node "addons-412730" to be "Ready" ...
	I0630 14:06:58.209080 1460091 api_server.go:52] waiting for apiserver process to appear ...
	I0630 14:06:58.209140 1460091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:06:58.223118 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:06:58.377311 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:06:58.393265 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:06:58.552870 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 14:06:58.629965 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0630 14:06:58.630008 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0630 14:06:58.758806 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0630 14:06:58.758842 1460091 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0630 14:06:58.850972 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:06:58.851001 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0630 14:06:59.026553 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0630 14:06:59.026591 1460091 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0630 14:06:59.029024 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0630 14:06:59.029049 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0630 14:06:59.194467 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:06:59.225323 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0630 14:06:59.225365 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0630 14:06:59.275081 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:06:59.275114 1460091 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0630 14:06:59.277525 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:06:59.360873 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0630 14:06:59.360922 1460091 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0630 14:06:59.365441 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0630 14:06:59.365473 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0630 14:06:59.479182 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0630 14:06:59.479223 1460091 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0630 14:06:59.632112 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:06:59.730609 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0630 14:06:59.730651 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0630 14:06:59.924237 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:06:59.924273 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0630 14:06:59.952744 1460091 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:06:59.952779 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0630 14:07:00.295758 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0630 14:07:00.295801 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0630 14:07:00.609047 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:07:00.711006 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:07:01.077427 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0630 14:07:01.077478 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0630 14:07:01.488779 1460091 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.786858112s)
	I0630 14:07:01.488824 1460091 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0630 14:07:01.488851 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.650181319s)
	I0630 14:07:01.488917 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:01.488939 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:01.489367 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:01.489386 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:01.489398 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:01.489407 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:01.489675 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:01.489692 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:01.519482 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0630 14:07:01.519507 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0630 14:07:01.953943 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0630 14:07:01.953981 1460091 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0630 14:07:02.000299 1460091 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-412730" context rescaled to 1 replicas
	I0630 14:07:02.634511 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0630 14:07:02.634547 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0630 14:07:03.286523 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0630 14:07:03.286560 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0630 14:07:03.817225 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:07:03.817256 1460091 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0630 14:07:04.096118 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0630 14:07:04.096173 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:07:04.099962 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:04.100533 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:07:04.100570 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:04.100887 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:07:04.101144 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:07:04.101379 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:07:04.101559 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:07:04.500309 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:07:05.218352 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0630 14:07:05.643348 1460091 addons.go:238] Setting addon gcp-auth=true in "addons-412730"
	I0630 14:07:05.643433 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:07:05.643934 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:07:05.643986 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:07:05.660744 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0630 14:07:05.661458 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:07:05.662215 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:07:05.662238 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:07:05.662683 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:07:05.663335 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:07:05.663379 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:07:05.682214 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0630 14:07:05.683058 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:07:05.683766 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:07:05.683791 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:07:05.684301 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:07:05.684542 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:07:05.686376 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:07:05.686632 1460091 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0630 14:07:05.686663 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:07:05.690202 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:05.690836 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:07:05.690876 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:05.691075 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:07:05.691278 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:07:05.691467 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:07:05.691655 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:07:11.565837 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.693634263s)
	I0630 14:07:11.565899 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.565914 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.565980 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.570295044s)
	I0630 14:07:11.566027 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.499537s)
	I0630 14:07:11.566089 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (13.470173071s)
	I0630 14:07:11.566122 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566098 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566168 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566176 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.42868021s)
	I0630 14:07:11.566202 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566212 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566039 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566229 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566242 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566137 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566252 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566260 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566283 1460091 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (13.357116893s)
	I0630 14:07:11.566302 1460091 api_server.go:72] duration metric: took 14.670334608s to wait for apiserver process to appear ...
	I0630 14:07:11.566309 1460091 api_server.go:88] waiting for apiserver healthz status ...
	I0630 14:07:11.566329 1460091 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0630 14:07:11.566328 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (13.343175575s)
	I0630 14:07:11.566350 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566360 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566359 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (13.189016834s)
	I0630 14:07:11.566380 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566389 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566439 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566447 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566456 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566462 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566686 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.566242 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566727 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566737 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566745 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566753 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566773 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566782 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566789 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566794 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566839 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.566844 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.173547374s)
	I0630 14:07:11.566862 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566868 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566871 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566874 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566881 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566753 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567113 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567151 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567170 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567176 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567183 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.567190 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.567203 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567217 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567249 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.567258 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.567271 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567282 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567309 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567329 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567335 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567250 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567548 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567578 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567585 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567976 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.568014 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.568021 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.568825 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.568856 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.568865 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566881 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569293 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.016393005s)
	I0630 14:07:11.569320 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569328 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569412 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.374918327s)
	I0630 14:07:11.569425 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569431 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569478 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.291926439s)
	I0630 14:07:11.569490 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569497 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569593 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.937451446s)
	I0630 14:07:11.569615 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569624 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569735 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.960641721s)
	W0630 14:07:11.569757 1460091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:07:11.569775 1460091 retry.go:31] will retry after 330.589533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:07:11.569820 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.858779326s)
	I0630 14:07:11.569834 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569841 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570507 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.570534 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.570540 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.570547 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.570552 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570841 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.570867 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.570873 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.570879 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.570884 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570993 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.571027 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.571032 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.571041 1460091 addons.go:479] Verifying addon metrics-server=true in "addons-412730"
	I0630 14:07:11.571778 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.571807 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.571816 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.571823 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.571830 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.571917 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.572331 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.572343 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.572353 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.572362 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.572758 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.572789 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.572797 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.572807 1460091 addons.go:479] Verifying addon ingress=true in "addons-412730"
	I0630 14:07:11.573202 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573214 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573223 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.573229 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.573243 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573257 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573283 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573302 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573308 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573315 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.573321 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.573502 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573535 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573568 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573586 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573947 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573962 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573971 1460091 addons.go:479] Verifying addon registry=true in "addons-412730"
	I0630 14:07:11.574975 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575013 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.575195 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.575240 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575258 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.575424 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575449 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.574703 1460091 out.go:177] * Verifying ingress addon...
	I0630 14:07:11.574951 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.576902 1460091 out.go:177] * Verifying registry addon...
	I0630 14:07:11.577803 1460091 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-412730 service yakd-dashboard -n yakd-dashboard
	
	I0630 14:07:11.578734 1460091 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0630 14:07:11.579547 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0630 14:07:11.618799 1460091 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0630 14:07:11.642386 1460091 api_server.go:141] control plane version: v1.33.2
	I0630 14:07:11.642428 1460091 api_server.go:131] duration metric: took 76.109211ms to wait for apiserver health ...
	I0630 14:07:11.642442 1460091 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 14:07:11.648379 1460091 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0630 14:07:11.648411 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:11.648426 1460091 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0630 14:07:11.648448 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:11.787935 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.787961 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.788293 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.788355 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:07:11.788482 1460091 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0630 14:07:11.788776 1460091 system_pods.go:59] 17 kube-system pods found
	I0630 14:07:11.788844 1460091 system_pods.go:61] "amd-gpu-device-plugin-jk4pf" [669e6afe-7041-4750-a8b3-b9b16b2c1200] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:07:11.788873 1460091 system_pods.go:61] "coredns-674b8bbfcf-55nn4" [f9bb36d9-fcc7-40a9-a574-a0c0d4a2e249] Running
	I0630 14:07:11.788883 1460091 system_pods.go:61] "csi-hostpath-attacher-0" [b2871319-8553-4b97-acc6-9fa791a121e7] Pending
	I0630 14:07:11.788891 1460091 system_pods.go:61] "etcd-addons-412730" [0d20e35f-0200-4c76-93c7-c5dc73170568] Running
	I0630 14:07:11.788902 1460091 system_pods.go:61] "kube-apiserver-addons-412730" [f635944a-97e7-41a4-93a2-bb7fcee2b33b] Running
	I0630 14:07:11.788912 1460091 system_pods.go:61] "kube-controller-manager-addons-412730" [bc65f29f-9646-460b-bbd6-d7633581c597] Running
	I0630 14:07:11.788923 1460091 system_pods.go:61] "kube-ingress-dns-minikube" [b9186cc8-be28-421d-8259-84f8fa275c24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:07:11.788933 1460091 system_pods.go:61] "kube-proxy-mgntr" [b2ebef04-6f35-4cb1-a058-5694a72ff27d] Running
	I0630 14:07:11.788941 1460091 system_pods.go:61] "kube-scheduler-addons-412730" [8cb21dd0-89ca-47fb-99e5-03acd8d6fc0f] Running
	I0630 14:07:11.788951 1460091 system_pods.go:61] "metrics-server-7fbb699795-kjqlg" [517ec2e4-c4bc-45b6-ada2-68d1e16b2f19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:07:11.788965 1460091 system_pods.go:61] "nvidia-device-plugin-daemonset-x5r2c" [b30b72eb-28c1-4e3a-972e-9db47c66ac6f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:07:11.788979 1460091 system_pods.go:61] "registry-694bd45846-xjdfn" [2538157e-75f2-429a-9ee9-dcbb6f56a814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:07:11.788992 1460091 system_pods.go:61] "registry-creds-6b69cdcdd5-kxnxr" [5d9d53ec-f97e-4851-9025-f208d9a9e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:07:11.789005 1460091 system_pods.go:61] "registry-proxy-dzp7x" [52f4bc70-5ad7-47f4-bd99-fc5cd471afab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:07:11.789017 1460091 system_pods.go:61] "snapshot-controller-68b874b76f-pn4tl" [26ebb6e6-2f9c-47b1-a6a2-d0bc2631fc74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.789029 1460091 system_pods.go:61] "snapshot-controller-68b874b76f-v6vkl" [3e0abe0b-9975-45f8-ba9b-1b5d010607ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.789036 1460091 system_pods.go:61] "storage-provisioner" [c5a4662a-1e04-4f23-bf87-a78f5608f496] Running
	I0630 14:07:11.789049 1460091 system_pods.go:74] duration metric: took 146.59926ms to wait for pod list to return data ...
	I0630 14:07:11.789066 1460091 default_sa.go:34] waiting for default service account to be created ...
	I0630 14:07:11.852937 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.852969 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.853375 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.853431 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.853445 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.859436 1460091 default_sa.go:45] found service account: "default"
	I0630 14:07:11.859476 1460091 default_sa.go:55] duration metric: took 70.393128ms for default service account to be created ...
	I0630 14:07:11.859487 1460091 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 14:07:11.900655 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:07:11.926835 1460091 system_pods.go:86] 18 kube-system pods found
	I0630 14:07:11.926878 1460091 system_pods.go:89] "amd-gpu-device-plugin-jk4pf" [669e6afe-7041-4750-a8b3-b9b16b2c1200] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:07:11.926886 1460091 system_pods.go:89] "coredns-674b8bbfcf-55nn4" [f9bb36d9-fcc7-40a9-a574-a0c0d4a2e249] Running
	I0630 14:07:11.926914 1460091 system_pods.go:89] "csi-hostpath-attacher-0" [b2871319-8553-4b97-acc6-9fa791a121e7] Pending
	I0630 14:07:11.926919 1460091 system_pods.go:89] "csi-hostpathplugin-z9jlw" [9852b523-2f8d-4c9a-85e8-7ac58ed5eebb] Pending
	I0630 14:07:11.926925 1460091 system_pods.go:89] "etcd-addons-412730" [0d20e35f-0200-4c76-93c7-c5dc73170568] Running
	I0630 14:07:11.926931 1460091 system_pods.go:89] "kube-apiserver-addons-412730" [f635944a-97e7-41a4-93a2-bb7fcee2b33b] Running
	I0630 14:07:11.926940 1460091 system_pods.go:89] "kube-controller-manager-addons-412730" [bc65f29f-9646-460b-bbd6-d7633581c597] Running
	I0630 14:07:11.926949 1460091 system_pods.go:89] "kube-ingress-dns-minikube" [b9186cc8-be28-421d-8259-84f8fa275c24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:07:11.926958 1460091 system_pods.go:89] "kube-proxy-mgntr" [b2ebef04-6f35-4cb1-a058-5694a72ff27d] Running
	I0630 14:07:11.926966 1460091 system_pods.go:89] "kube-scheduler-addons-412730" [8cb21dd0-89ca-47fb-99e5-03acd8d6fc0f] Running
	I0630 14:07:11.926977 1460091 system_pods.go:89] "metrics-server-7fbb699795-kjqlg" [517ec2e4-c4bc-45b6-ada2-68d1e16b2f19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:07:11.926990 1460091 system_pods.go:89] "nvidia-device-plugin-daemonset-x5r2c" [b30b72eb-28c1-4e3a-972e-9db47c66ac6f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:07:11.927011 1460091 system_pods.go:89] "registry-694bd45846-xjdfn" [2538157e-75f2-429a-9ee9-dcbb6f56a814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:07:11.927030 1460091 system_pods.go:89] "registry-creds-6b69cdcdd5-kxnxr" [5d9d53ec-f97e-4851-9025-f208d9a9e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:07:11.927042 1460091 system_pods.go:89] "registry-proxy-dzp7x" [52f4bc70-5ad7-47f4-bd99-fc5cd471afab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:07:11.927050 1460091 system_pods.go:89] "snapshot-controller-68b874b76f-pn4tl" [26ebb6e6-2f9c-47b1-a6a2-d0bc2631fc74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.927061 1460091 system_pods.go:89] "snapshot-controller-68b874b76f-v6vkl" [3e0abe0b-9975-45f8-ba9b-1b5d010607ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.927074 1460091 system_pods.go:89] "storage-provisioner" [c5a4662a-1e04-4f23-bf87-a78f5608f496] Running
	I0630 14:07:11.927089 1460091 system_pods.go:126] duration metric: took 67.593682ms to wait for k8s-apps to be running ...
	I0630 14:07:11.927104 1460091 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 14:07:11.927169 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:07:12.193770 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:12.193803 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:12.354834 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.854466413s)
	I0630 14:07:12.354924 1460091 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.668263946s)
	I0630 14:07:12.354926 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:12.355156 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:12.355521 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:12.355577 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:12.355605 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:12.355625 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:12.355646 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:12.355981 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:12.356003 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:12.356015 1460091 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-412730"
	I0630 14:07:12.356885 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:07:12.357715 1460091 out.go:177] * Verifying csi-hostpath-driver addon...
	I0630 14:07:12.359034 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0630 14:07:12.359721 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0630 14:07:12.360023 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0630 14:07:12.360041 1460091 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0630 14:07:12.406216 1460091 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0630 14:07:12.406263 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:12.559364 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0630 14:07:12.559403 1460091 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0630 14:07:12.584643 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:12.585219 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:12.665811 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:07:12.665844 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0630 14:07:12.836140 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:07:12.865786 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:13.084231 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:13.084272 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:13.365331 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:13.585910 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:13.586224 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:13.635029 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.734314641s)
	I0630 14:07:13.635075 1460091 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.707884059s)
	I0630 14:07:13.635092 1460091 system_svc.go:56] duration metric: took 1.707986766s WaitForService to wait for kubelet
	I0630 14:07:13.635101 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:13.635119 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:13.635108 1460091 kubeadm.go:578] duration metric: took 16.739135366s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:07:13.635141 1460091 node_conditions.go:102] verifying NodePressure condition ...
	I0630 14:07:13.635462 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:13.635484 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:13.635497 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:13.635507 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:13.635808 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:13.635828 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:13.638761 1460091 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 14:07:13.638792 1460091 node_conditions.go:123] node cpu capacity is 2
	I0630 14:07:13.638809 1460091 node_conditions.go:105] duration metric: took 3.661934ms to run NodePressure ...
	I0630 14:07:13.638826 1460091 start.go:241] waiting for startup goroutines ...
	I0630 14:07:13.875752 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:14.024111 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.187911729s)
	I0630 14:07:14.024195 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:14.024227 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:14.024586 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:14.024683 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:14.024691 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:14.024702 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:14.024712 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:14.024994 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:14.025013 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:14.025043 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:14.026382 1460091 addons.go:479] Verifying addon gcp-auth=true in "addons-412730"
	I0630 14:07:14.029054 1460091 out.go:177] * Verifying gcp-auth addon...
	I0630 14:07:14.031483 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0630 14:07:14.064027 1460091 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0630 14:07:14.064055 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:14.100781 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:14.114141 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:14.365832 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:14.534739 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:14.583821 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:14.584016 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:14.864558 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:15.035462 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:15.083316 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:15.083872 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:15.363154 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:15.536843 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:15.584338 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:15.585465 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:15.864842 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:16.035682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:16.084017 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:16.084651 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:16.497202 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:16.537408 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:16.584546 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:16.587004 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:16.863546 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:17.035257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:17.082833 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:17.083256 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:17.367136 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:17.536257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:17.583638 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:17.584977 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:17.896589 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:18.035682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:18.083625 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:18.084228 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:18.363753 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:18.535354 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:18.583096 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:18.583122 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:18.955635 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:19.035257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:19.083049 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:19.083420 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:19.364160 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:19.536108 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:19.582458 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:19.583611 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:19.862653 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:20.034233 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:20.082846 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:20.083682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:20.364310 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:20.535698 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:20.583894 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:20.583979 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:20.863445 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:21.036429 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:21.084981 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:21.085104 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:21.363349 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:21.706174 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:21.707208 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:21.707678 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:21.865772 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:22.035893 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:22.083199 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:22.084016 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:22.364233 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:22.535367 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:22.583354 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:22.583535 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:22.865792 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:23.035789 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:23.136995 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:23.137134 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:23.363626 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:23.535937 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:23.582498 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:23.583466 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:23.864738 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:24.034476 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:24.083541 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:24.084048 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:24.364616 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:24.536239 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:24.583008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:24.583026 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:24.864935 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:25.035523 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:25.082940 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:25.083056 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:25.363774 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:25.534897 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:25.583749 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:25.583954 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:25.863865 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:26.034706 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:26.084015 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:26.084175 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:26.363040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:26.536862 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:26.583797 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:26.583943 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.189951 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:27.190109 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:27.190223 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.191199 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:27.366231 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:27.535516 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:27.584025 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.584989 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:27.864198 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:28.037431 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:28.082788 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:28.083975 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:28.363252 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:28.535710 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:28.583888 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:28.584004 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:28.864040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:29.034895 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:29.082915 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:29.083605 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:29.363381 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:29.535032 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:29.582676 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:29.583815 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:29.865439 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:30.036869 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:30.084069 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:30.084108 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:30.364800 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:30.535912 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:30.583840 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:30.585080 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:30.864767 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:31.044830 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:31.084386 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:31.084487 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:31.364893 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:31.623955 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:31.624096 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:31.625461 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:31.863871 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:32.035869 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:32.085127 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:32.086207 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:32.373662 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:32.539255 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:32.587456 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:32.588975 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:32.863384 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:33.037175 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:33.083368 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:33.086594 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:33.363683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:33.535971 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:33.582220 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:33.583079 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:33.864086 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:34.035104 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:34.087614 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:34.090507 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:34.364243 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:34.535472 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:34.582842 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:34.583065 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:34.864351 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:35.038245 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:35.083459 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:35.083968 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:35.364140 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:35.535203 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:35.583507 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:35.583504 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:35.864421 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:36.035870 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:36.082290 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:36.083322 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:36.363896 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:36.536935 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:36.592002 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:36.592024 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:36.867249 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:37.035497 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:37.082561 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:37.083545 1460091 kapi.go:107] duration metric: took 25.503987228s to wait for kubernetes.io/minikube-addons=registry ...
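The `duration metric` line above marks one label-selector wait (registry) completing after ~25.5 s, while the surrounding `kapi.go:96` lines show the other selectors still polling roughly every 300 ms until the test's 6 m budget runs out with "context deadline exceeded". As an illustration only (not minikube's actual kapi.go code), that poll-until-ready pattern can be sketched as:

```python
import time

def wait_for(check, interval=0.3, timeout=6 * 60):
    """Poll `check` until it returns True or the deadline passes.
    Interval and timeout defaults mirror the ~300 ms polls and 6 m
    budget visible in the log; the function itself is hypothetical."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            # Matches the error surfaced by the failing tests.
            raise TimeoutError("context deadline exceeded")
        time.sleep(interval)

# Simulated pod that reports Ready on the third poll.
state = {"polls": 0}
def pod_ready():
    state["polls"] += 1
    return state["polls"] >= 3

wait_for(pod_ready, interval=0.01, timeout=1.0)
print(state["polls"])  # → 3
```

In the failing runs above, `check` never returns true for the volcano, ingress, CSI, and gcp-auth selectors, so every loop ends on the timeout branch.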
	I0630 14:07:37.364896 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:37.535915 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:37.582416 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:37.863882 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:38.035195 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:38.084077 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:38.363908 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:38.536012 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:38.582871 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:38.865977 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:39.036008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:39.083221 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:39.366301 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:39.537043 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:39.584445 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:39.864115 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:40.035178 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:40.082503 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:40.364953 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:40.539118 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:40.582790 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:40.920318 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:41.039974 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:41.140897 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:41.363490 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:41.536671 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:41.584110 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.151839 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:42.151893 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.151941 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:42.364151 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:42.535860 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:42.637454 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.869058 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:43.034755 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:43.083141 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:43.365516 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:43.539831 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:43.585574 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:43.867882 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:44.035437 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:44.083399 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:44.364009 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:44.534997 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:44.582616 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:44.865028 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:45.034987 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:45.083033 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:45.363797 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:45.536061 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:45.582192 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:45.863930 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:46.035610 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:46.082940 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:46.363183 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:46.536317 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:46.582800 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:46.863634 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:47.035461 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:47.082263 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:47.364204 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:47.537008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:47.638719 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:47.867382 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:48.035628 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:48.082998 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:48.363676 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:48.535845 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:48.583373 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:48.865933 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:49.035994 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:49.082615 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:49.364741 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:49.763038 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:49.763188 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:49.864019 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:50.034923 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:50.081789 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:50.363509 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:50.536302 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:50.582756 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.084972 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:51.085222 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:51.088586 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.365037 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:51.536393 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:51.583205 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.863948 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:52.036793 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:52.083280 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:52.363764 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:52.534903 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:52.582225 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:52.863489 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:53.035662 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:53.083237 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:53.363683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:53.535229 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:53.582794 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:53.864519 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:54.035606 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:54.083006 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:54.363649 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:54.534894 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:54.582432 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:54.874053 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:55.036295 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:55.138176 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:55.439408 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:55.536289 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:55.583387 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:55.877077 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:56.038681 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:56.088650 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:56.364716 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:56.537099 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:56.638302 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:56.888274 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:57.065461 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:57.082558 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:57.364271 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:57.537383 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:57.584203 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:57.864829 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:58.035093 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:58.082842 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:58.368712 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:58.536145 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:58.583188 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:58.864081 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:59.035171 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:59.082395 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:59.363881 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:59.770427 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:59.775289 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:59.886727 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:00.036389 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:00.138257 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:00.365066 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:00.543394 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:00.587828 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:00.862860 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:01.045510 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:01.084722 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:01.370626 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:01.543476 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:01.643717 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:01.863100 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:02.036395 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:02.083306 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:02.364022 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:02.536447 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:02.582849 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:02.863402 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:03.043769 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:03.084338 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:03.364984 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:03.537068 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:03.583105 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:03.873833 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:04.064570 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:04.165207 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:04.363705 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:04.534655 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:04.582773 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:04.865214 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:05.040132 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:05.082101 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:05.364071 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:05.535996 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:05.583847 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:05.864830 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:06.035167 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:06.082727 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:06.364040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:06.536325 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:06.584424 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:06.867769 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:07.035374 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:07.085873 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:07.363748 1460091 kapi.go:107] duration metric: took 55.004020875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0630 14:08:07.535663 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:07.583300 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:08.036340 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:08.083025 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:08.537501 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:08.583289 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:09.035787 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:09.083288 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:09.536861 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:09.895410 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:10.036972 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:10.103056 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:10.537875 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:10.583172 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:11.036116 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:11.082706 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:11.537110 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:11.583096 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:12.035141 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:12.083220 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:12.535683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:12.583269 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:13.035346 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:13.085856 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:13.535419 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:13.584214 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:14.035523 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:14.086182 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:14.538450 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:14.584164 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:15.035469 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:15.082710 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:15.535978 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:15.584976 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:16.035643 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:16.083354 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:16.536216 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:16.582722 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:17.036015 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:17.082827 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:17.535105 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:17.582197 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:18.036044 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:18.082594 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:18.535731 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:18.636867 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:19.040011 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:19.084634 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:19.538800 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:19.584691 1460091 kapi.go:107] duration metric: took 1m8.005950872s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0630 14:08:20.046904 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:20.544735 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:21.045744 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:21.545748 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:22.039630 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:22.538370 1460091 kapi.go:107] duration metric: took 1m8.506886725s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0630 14:08:22.539980 1460091 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-412730 cluster.
	I0630 14:08:22.541245 1460091 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0630 14:08:22.542490 1460091 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0630 14:08:22.544085 1460091 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, volcano, inspektor-gadget, registry-creds, cloud-spanner, metrics-server, ingress-dns, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0630 14:08:22.545451 1460091 addons.go:514] duration metric: took 1m25.649456906s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin volcano inspektor-gadget registry-creds cloud-spanner metrics-server ingress-dns storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0630 14:08:22.545505 1460091 start.go:246] waiting for cluster config update ...
	I0630 14:08:22.545527 1460091 start.go:255] writing updated cluster config ...
	I0630 14:08:22.545830 1460091 ssh_runner.go:195] Run: rm -f paused
	I0630 14:08:22.552874 1460091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:08:22.645593 1460091 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-55nn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.650587 1460091 pod_ready.go:94] pod "coredns-674b8bbfcf-55nn4" is "Ready"
	I0630 14:08:22.650616 1460091 pod_ready.go:86] duration metric: took 4.992795ms for pod "coredns-674b8bbfcf-55nn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.653714 1460091 pod_ready.go:83] waiting for pod "etcd-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.658042 1460091 pod_ready.go:94] pod "etcd-addons-412730" is "Ready"
	I0630 14:08:22.658066 1460091 pod_ready.go:86] duration metric: took 4.323836ms for pod "etcd-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.660310 1460091 pod_ready.go:83] waiting for pod "kube-apiserver-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.664410 1460091 pod_ready.go:94] pod "kube-apiserver-addons-412730" is "Ready"
	I0630 14:08:22.664433 1460091 pod_ready.go:86] duration metric: took 4.099276ms for pod "kube-apiserver-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.666354 1460091 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.958219 1460091 pod_ready.go:94] pod "kube-controller-manager-addons-412730" is "Ready"
	I0630 14:08:22.958253 1460091 pod_ready.go:86] duration metric: took 291.880924ms for pod "kube-controller-manager-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.158459 1460091 pod_ready.go:83] waiting for pod "kube-proxy-mgntr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.557555 1460091 pod_ready.go:94] pod "kube-proxy-mgntr" is "Ready"
	I0630 14:08:23.557587 1460091 pod_ready.go:86] duration metric: took 399.092549ms for pod "kube-proxy-mgntr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.758293 1460091 pod_ready.go:83] waiting for pod "kube-scheduler-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:24.157033 1460091 pod_ready.go:94] pod "kube-scheduler-addons-412730" is "Ready"
	I0630 14:08:24.157070 1460091 pod_ready.go:86] duration metric: took 398.746217ms for pod "kube-scheduler-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:24.157088 1460091 pod_ready.go:40] duration metric: took 1.604151264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:08:24.206500 1460091 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 14:08:24.208969 1460091 out.go:177] * Done! kubectl is now configured to use "addons-412730" cluster and "default" namespace by default
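
The gcp-auth messages earlier in this log note that credentials are mounted into every pod unless the pod carries the `gcp-auth-skip-secret` label. As an illustration only (the pod name and image below are placeholders, not taken from this run), an opted-out pod spec would carry the label like this:

```yaml
# Hypothetical pod spec; only the gcp-auth-skip-secret label comes from the log above.
apiVersion: v1
kind: Pod
metadata:
  name: example-no-gcp-auth        # placeholder name
  labels:
    gcp-auth-skip-secret: "true"   # tells the gcp-auth webhook not to mount GCP credentials
spec:
  containers:
  - name: app
    image: busybox                 # placeholder image
```

Per the log output, pods created before the addon was enabled must be recreated (or the addon re-enabled with `--refresh`) for credential mounting to take effect.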
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	46e9c486237cc       56cc512116c8f       8 minutes ago       Running             busybox                   0                   5b8f43d306a71       busybox
	a41e1f5d78ba3       158e2f2d90f21       14 minutes ago      Running             controller                0                   ad79beda1cd96       ingress-nginx-controller-67687b59dd-vvcrv
	2c3efa502f6ac       0ea86a0862033       15 minutes ago      Exited              patch                     0                   479724e3cf758       ingress-nginx-admission-patch-fl6cb
	8ff6da260516f       0ea86a0862033       15 minutes ago      Exited              create                    0                   104d25c1177d7       ingress-nginx-admission-create-gpszb
	2618e4dc11783       30dd67412fdea       15 minutes ago      Running             minikube-ingress-dns      0                   0fd95f2b44624       kube-ingress-dns-minikube
	811184505fb18       d5e667c0f2bb6       15 minutes ago      Running             amd-gpu-device-plugin     0                   b44acdeabc7e9       amd-gpu-device-plugin-jk4pf
	60e507365f1d3       6e38f40d628db       16 minutes ago      Running             storage-provisioner       0                   c81c97cad8c5e       storage-provisioner
	8e1e019f61b20       1cf5f116067c6       16 minutes ago      Running             coredns                   0                   f0e3a5c4dc1ba       coredns-674b8bbfcf-55nn4
	e9d272ef95cc8       661d404f36f01       16 minutes ago      Running             kube-proxy                0                   ec083bc9ceaf6       kube-proxy-mgntr
	cda40c61e5780       cfed1ff748928       16 minutes ago      Running             kube-scheduler            0                   8b62447a9ffbc       kube-scheduler-addons-412730
	0f5bd8617276d       ee794efa53d85       16 minutes ago      Running             kube-apiserver            0                   296d470d26007       kube-apiserver-addons-412730
	ed722ba732c02       ff4f56c76b82d       16 minutes ago      Running             kube-controller-manager   0                   6de0b1c4abb94       kube-controller-manager-addons-412730
	0aa8fdef51063       499038711c081       16 minutes ago      Running             etcd                      0                   2ea511d5408a9       etcd-addons-412730
	
	
	==> containerd <==
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.393402078Z" level=info msg="RemovePodSandbox \"b4fec9a2b5ea5e656a9fe84abcf925b349e7fc5fcbfdc29dcb3def305c2649f4\" returns successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.394382523Z" level=info msg="StopPodSandbox for \"1b37be17df7f2d32ba1e2dfeb14eda8bdccc7823d58f2c32b9b440984f2e23b9\""
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.422195521Z" level=info msg="TearDown network for sandbox \"1b37be17df7f2d32ba1e2dfeb14eda8bdccc7823d58f2c32b9b440984f2e23b9\" successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.422336281Z" level=info msg="StopPodSandbox for \"1b37be17df7f2d32ba1e2dfeb14eda8bdccc7823d58f2c32b9b440984f2e23b9\" returns successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.423320512Z" level=info msg="RemovePodSandbox for \"1b37be17df7f2d32ba1e2dfeb14eda8bdccc7823d58f2c32b9b440984f2e23b9\""
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.423487455Z" level=info msg="Forcibly stopping sandbox \"1b37be17df7f2d32ba1e2dfeb14eda8bdccc7823d58f2c32b9b440984f2e23b9\""
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.448264691Z" level=info msg="TearDown network for sandbox \"1b37be17df7f2d32ba1e2dfeb14eda8bdccc7823d58f2c32b9b440984f2e23b9\" successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.454633780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b37be17df7f2d32ba1e2dfeb14eda8bdccc7823d58f2c32b9b440984f2e23b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.454710050Z" level=info msg="RemovePodSandbox \"1b37be17df7f2d32ba1e2dfeb14eda8bdccc7823d58f2c32b9b440984f2e23b9\" returns successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.455324925Z" level=info msg="StopPodSandbox for \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\""
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.482733884Z" level=info msg="TearDown network for sandbox \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\" successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.482778817Z" level=info msg="StopPodSandbox for \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\" returns successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.483672169Z" level=info msg="RemovePodSandbox for \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\""
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.483771390Z" level=info msg="Forcibly stopping sandbox \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\""
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.509384441Z" level=info msg="TearDown network for sandbox \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\" successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.516285877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.516511638Z" level=info msg="RemovePodSandbox \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\" returns successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.519617406Z" level=info msg="StopPodSandbox for \"6f9489fdc42359ca8c8ff792e4736982c7ea2d8261b457ebe7e066e191f63633\""
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.548803114Z" level=info msg="TearDown network for sandbox \"6f9489fdc42359ca8c8ff792e4736982c7ea2d8261b457ebe7e066e191f63633\" successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.548939056Z" level=info msg="StopPodSandbox for \"6f9489fdc42359ca8c8ff792e4736982c7ea2d8261b457ebe7e066e191f63633\" returns successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.549977611Z" level=info msg="RemovePodSandbox for \"6f9489fdc42359ca8c8ff792e4736982c7ea2d8261b457ebe7e066e191f63633\""
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.550025746Z" level=info msg="Forcibly stopping sandbox \"6f9489fdc42359ca8c8ff792e4736982c7ea2d8261b457ebe7e066e191f63633\""
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.574579958Z" level=info msg="TearDown network for sandbox \"6f9489fdc42359ca8c8ff792e4736982c7ea2d8261b457ebe7e066e191f63633\" successfully"
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.580804267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f9489fdc42359ca8c8ff792e4736982c7ea2d8261b457ebe7e066e191f63633\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jun 30 14:21:54 addons-412730 containerd[860]: time="2025-06-30T14:21:54.580896675Z" level=info msg="RemovePodSandbox \"6f9489fdc42359ca8c8ff792e4736982c7ea2d8261b457ebe7e066e191f63633\" returns successfully"
	
	
	==> coredns [8e1e019f61b2004e8815ddbaf9eb6f733467fc8a79bd77196bc0c76b85b8b99c] <==
	[INFO] 10.244.0.7:37816 - 48483 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00020548s
	[INFO] 10.244.0.7:37816 - 18283 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000160064s
	[INFO] 10.244.0.7:37816 - 57759 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000505163s
	[INFO] 10.244.0.7:37816 - 2367 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000121216s
	[INFO] 10.244.0.7:37816 - 32941 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000407687s
	[INFO] 10.244.0.7:37816 - 38124 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00021235s
	[INFO] 10.244.0.7:37816 - 42370 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000448784s
	[INFO] 10.244.0.7:49788 - 53103 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191609s
	[INFO] 10.244.0.7:49788 - 52743 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161724s
	[INFO] 10.244.0.7:59007 - 35302 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000389724s
	[INFO] 10.244.0.7:59007 - 35035 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000520532s
	[INFO] 10.244.0.7:46728 - 65447 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133644s
	[INFO] 10.244.0.7:46728 - 65148 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00061652s
	[INFO] 10.244.0.7:50533 - 14727 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000567642s
	[INFO] 10.244.0.7:50533 - 14481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000783618s
	[INFO] 10.244.0.27:51053 - 48711 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000523898s
	[INFO] 10.244.0.27:40917 - 60785 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000642215s
	[INFO] 10.244.0.27:35189 - 63805 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096026s
	[INFO] 10.244.0.27:43478 - 6990 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00040325s
	[INFO] 10.244.0.27:53994 - 15788 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170635s
	[INFO] 10.244.0.27:51155 - 39553 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128149s
	[INFO] 10.244.0.27:37346 - 35756 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001274741s
	[INFO] 10.244.0.27:38294 - 56651 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000805113s
	[INFO] 10.244.0.31:54260 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000711267s
	[INFO] 10.244.0.31:46467 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124471s
	
	
	==> describe nodes <==
	Name:               addons-412730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-412730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=addons-412730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_06_53_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-412730
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:06:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-412730
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:23:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:20:29 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:20:29 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:20:29 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:20:29 +0000   Mon, 30 Jun 2025 14:06:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    addons-412730
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc9448cb8b5448fc9151301fb29bc0cd
	  System UUID:                bc9448cb-8b54-48fc-9151-301fb29bc0cd
	  Boot ID:                    6141a1b2-f9ea-4f8f-bc9e-ef270348f968
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m35s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  ingress-nginx               ingress-nginx-controller-67687b59dd-vvcrv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         16m
	  kube-system                 amd-gpu-device-plugin-jk4pf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-674b8bbfcf-55nn4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 etcd-addons-412730                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-412730                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-412730        200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-mgntr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-412730                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-412730 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-412730 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node addons-412730 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node addons-412730 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node addons-412730 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node addons-412730 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m                kubelet          Node addons-412730 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node addons-412730 event: Registered Node addons-412730 in Controller
	
	
	==> dmesg <==
	[  +4.862777] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.721987] kauditd_printk_skb: 3 callbacks suppressed
	[  +3.179109] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.932449] kauditd_printk_skb: 47 callbacks suppressed
	[  +4.007047] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.735579] kauditd_printk_skb: 26 callbacks suppressed
	[Jun30 14:08] kauditd_printk_skb: 76 callbacks suppressed
	[  +4.704545] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.836614] kauditd_printk_skb: 61 callbacks suppressed
	[Jun30 14:09] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:10] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:13] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:14] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.000048] kauditd_printk_skb: 19 callbacks suppressed
	[ +11.983780] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.925929] kauditd_printk_skb: 2 callbacks suppressed
	[Jun30 14:15] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.009854] kauditd_printk_skb: 28 callbacks suppressed
	[  +1.375797] kauditd_printk_skb: 61 callbacks suppressed
	[  +3.058612] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.836555] kauditd_printk_skb: 9 callbacks suppressed
	[Jun30 14:17] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 14:19] kauditd_printk_skb: 2 callbacks suppressed
	[Jun30 14:20] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [0aa8fdef5106381a33bf7fae10904caa793ace481cae1d43127914ffe86d49ff] <==
	{"level":"info","ts":"2025-06-30T14:07:49.751590Z","caller":"traceutil/trace.go:171","msg":"trace[559772973] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"267.154952ms","start":"2025-06-30T14:07:49.483661Z","end":"2025-06-30T14:07:49.750816Z","steps":["trace[559772973] 'process raft request'  (duration: 266.932951ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:07:49.752866Z","caller":"traceutil/trace.go:171","msg":"trace[154741241] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1203; }","duration":"176.571713ms","start":"2025-06-30T14:07:49.576287Z","end":"2025-06-30T14:07:49.752858Z","steps":["trace[154741241] 'agreement among raft nodes before linearized reading'  (duration: 176.438082ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.060101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.201972ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156627244712664246 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/snapshot-controller-68b874b76f-v6vkl.184dd73930f85720\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/snapshot-controller-68b874b76f-v6vkl.184dd73930f85720\" value_size:707 lease:3156627244712664233 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-06-30T14:07:51.060508Z","caller":"traceutil/trace.go:171","msg":"trace[1403560008] linearizableReadLoop","detail":"{readStateIndex:1246; appliedIndex:1245; }","duration":"269.602891ms","start":"2025-06-30T14:07:50.790891Z","end":"2025-06-30T14:07:51.060494Z","steps":["trace[1403560008] 'read index received'  (duration: 53.900301ms)","trace[1403560008] 'applied index is now lower than readState.Index'  (duration: 215.701517ms)"],"step_count":2}
	{"level":"info","ts":"2025-06-30T14:07:51.060687Z","caller":"traceutil/trace.go:171","msg":"trace[1928328932] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"282.940847ms","start":"2025-06-30T14:07:50.777737Z","end":"2025-06-30T14:07:51.060678Z","steps":["trace[1928328932] 'process raft request'  (duration: 67.101901ms)","trace[1928328932] 'compare'  (duration: 214.876695ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T14:07:51.060917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.674634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:51.060970Z","caller":"traceutil/trace.go:171","msg":"trace[1908369901] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1214; }","duration":"254.762861ms","start":"2025-06-30T14:07:50.806198Z","end":"2025-06-30T14:07:51.060961Z","steps":["trace[1908369901] 'agreement among raft nodes before linearized reading'  (duration: 254.494296ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.061332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.462832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-gpszb\" limit:1 ","response":"range_response_count:1 size:4215"}
	{"level":"info","ts":"2025-06-30T14:07:51.061377Z","caller":"traceutil/trace.go:171","msg":"trace[1518962383] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-create-gpszb; range_end:; response_count:1; response_revision:1214; }","duration":"270.575777ms","start":"2025-06-30T14:07:50.790792Z","end":"2025-06-30T14:07:51.061368Z","steps":["trace[1518962383] 'agreement among raft nodes before linearized reading'  (duration: 270.487611ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.061955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.960425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:51.062418Z","caller":"traceutil/trace.go:171","msg":"trace[621823114] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1214; }","duration":"205.559852ms","start":"2025-06-30T14:07:50.856769Z","end":"2025-06-30T14:07:51.062329Z","steps":["trace[621823114] 'agreement among raft nodes before linearized reading'  (duration: 204.992694ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:55.431218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.529916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:55.431286Z","caller":"traceutil/trace.go:171","msg":"trace[1840291804] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1254; }","duration":"185.638229ms","start":"2025-06-30T14:07:55.245637Z","end":"2025-06-30T14:07:55.431275Z","steps":["trace[1840291804] 'count revisions from in-memory index tree'  (duration: 185.483282ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.760814Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.563816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.761810Z","caller":"traceutil/trace.go:171","msg":"trace[1037456471] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1289; }","duration":"232.616347ms","start":"2025-06-30T14:07:59.529177Z","end":"2025-06-30T14:07:59.761793Z","steps":["trace[1037456471] 'range keys from in-memory index tree'  (duration: 231.18055ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.762324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.982539ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.762383Z","caller":"traceutil/trace.go:171","msg":"trace[856262130] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1289; }","duration":"197.052432ms","start":"2025-06-30T14:07:59.565321Z","end":"2025-06-30T14:07:59.762373Z","steps":["trace[856262130] 'range keys from in-memory index tree'  (duration: 196.924905ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.767749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.524873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.767792Z","caller":"traceutil/trace.go:171","msg":"trace[2033650698] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1289; }","duration":"189.645425ms","start":"2025-06-30T14:07:59.578136Z","end":"2025-06-30T14:07:59.767782Z","steps":["trace[2033650698] 'range keys from in-memory index tree'  (duration: 183.005147ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:16:47.709200Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1900}
	{"level":"info","ts":"2025-06-30T14:16:47.874708Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1900,"took":"164.330155ms","hash":2534900505,"current-db-size-bytes":12238848,"current-db-size":"12 MB","current-db-size-in-use-bytes":7974912,"current-db-size-in-use":"8.0 MB"}
	{"level":"info","ts":"2025-06-30T14:16:47.875273Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2534900505,"revision":1900,"compact-revision":-1}
	{"level":"info","ts":"2025-06-30T14:21:47.719758Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2908}
	{"level":"info","ts":"2025-06-30T14:21:47.749244Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":2908,"took":"27.993028ms","hash":1771088741,"current-db-size-bytes":12238848,"current-db-size":"12 MB","current-db-size-in-use-bytes":6393856,"current-db-size-in-use":"6.4 MB"}
	{"level":"info","ts":"2025-06-30T14:21:47.749384Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1771088741,"revision":2908,"compact-revision":1900}
	
	
	==> kernel <==
	 14:23:13 up 17 min,  0 users,  load average: 0.16, 0.36, 0.39
	Linux addons-412730 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0f5bd8617276d56b4d1c938db3290f5057a6076ca2a1ff6b72007428d9958a0f] <==
	I0630 14:15:08.183782       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:11.441632       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:11.868485       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0630 14:15:12.083379       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.15.45"}
	I0630 14:15:12.087255       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:16.776061       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:19.939310       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0630 14:15:20.985204       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0630 14:15:31.545392       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:42.030628       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0630 14:16:49.559945       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:21:09.225982       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0630 14:21:09.226040       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0630 14:21:09.250314       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0630 14:21:09.250384       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0630 14:21:09.275988       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0630 14:21:09.276063       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0630 14:21:09.336364       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0630 14:21:09.336425       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0630 14:21:09.348868       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0630 14:21:09.348927       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0630 14:21:10.337281       1 cacher.go:183] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0630 14:21:10.349404       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0630 14:21:10.442637       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0630 14:21:10.473152       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ed722ba732c0211e772331fd643a8e48e5ef0b8cd4b82f97d3a5d69b9aa30756] <==
	E0630 14:22:10.515957       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:11.510786       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:22:13.169965       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:20.549587       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:20.644282       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:25.062037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:26.318726       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:26.511823       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:22:33.478378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:37.752415       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:39.537357       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:39.674894       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:41.512258       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:22:43.982375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:45.546870       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:46.034225       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:56.512410       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:22:57.110127       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:22:58.577551       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:23:07.201346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:23:08.991398       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:23:11.512921       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:23:12.520702       1 csi_attacher.go:522] kubernetes.io/csi: Attach timeout after 2m0s [volume=9ea8c774-55bc-11f0-a358-9232e811893c; attachment.ID=csi-9288beecb48622936dab73617f74eeb3ebb8da138c2351b6094073bf2e406aae]
	E0630 14:23:12.521249       1 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^9ea8c774-55bc-11f0-a358-9232e811893c podName: nodeName:}" failed. No retries permitted until 2025-06-30 14:23:13.021205994 +0000 UTC m=+986.305992809 (durationBeforeRetry 500ms). Error: AttachVolume.Attach failed for volume "pvc-f9cc5716-bb8f-487f-9ca7-ed8bc01ee668" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^9ea8c774-55bc-11f0-a358-9232e811893c") from node "addons-412730" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 9ea8c774-55bc-11f0-a358-9232e811893c
	I0630 14:23:13.113257       1 reconciler.go:360] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^9ea8c774-55bc-11f0-a358-9232e811893c" nodeName="addons-412730" scheduledPods=["default/task-pv-pod"]
	
	
	==> kube-proxy [e9d272ef95cc8f73e12d5cc59f4966731013d924126fc8eb0bd96e6acc623f27] <==
	E0630 14:06:58.349607       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:06:58.396678       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	E0630 14:06:58.396782       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:06:58.682235       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:06:58.682289       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:06:58.682317       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:06:58.729336       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:06:58.729702       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:06:58.729714       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:06:58.747265       1 config.go:199] "Starting service config controller"
	I0630 14:06:58.747303       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:06:58.747324       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:06:58.747328       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:06:58.747339       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:06:58.747342       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:06:58.747357       1 config.go:329] "Starting node config controller"
	I0630 14:06:58.747360       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:06:58.847644       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0630 14:06:58.847708       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:06:58.847734       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:06:58.848003       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cda40c61e5780477d5a234f04d425f2347a784973443632c68938aea16f474e6] <==
	E0630 14:06:49.633867       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:06:49.633920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:06:49.634247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:06:49.636896       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:06:49.637563       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:06:49.637783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:06:49.638039       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:06:49.638190       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:06:49.638365       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:06:49.638496       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:06:49.638609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:06:49.638719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:06:49.638999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:06:50.551259       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:06:50.618504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:06:50.628999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0630 14:06:50.679571       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:06:50.702747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:06:50.708224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:06:50.796622       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:06:50.797647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:06:50.806980       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:06:50.808489       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:06:50.967143       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0630 14:06:53.415169       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 30 14:21:25 addons-412730 kubelet[1571]: E0630 14:21:25.443914    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:21:26 addons-412730 kubelet[1571]: E0630 14:21:26.443020    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:21:36 addons-412730 kubelet[1571]: E0630 14:21:36.444121    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:21:37 addons-412730 kubelet[1571]: E0630 14:21:37.443237    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:21:42 addons-412730 kubelet[1571]: I0630 14:21:42.443807    1571 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jk4pf" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:21:51 addons-412730 kubelet[1571]: E0630 14:21:51.443806    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:21:52 addons-412730 kubelet[1571]: E0630 14:21:52.444502    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:21:54 addons-412730 kubelet[1571]: I0630 14:21:54.163351    1571 scope.go:117] "RemoveContainer" containerID="b61ad9d665eb612a006ba297556e0d667f24dd3c92b29a156223bdb5eb9a33ea"
	Jun 30 14:21:54 addons-412730 kubelet[1571]: I0630 14:21:54.171771    1571 scope.go:117] "RemoveContainer" containerID="dca6ca157e955030b92b800423ad5898923a589c6b4d87999e827a8befb47054"
	Jun 30 14:21:55 addons-412730 kubelet[1571]: I0630 14:21:55.442938    1571 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:22:02 addons-412730 kubelet[1571]: E0630 14:22:02.444213    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:22:03 addons-412730 kubelet[1571]: E0630 14:22:03.443278    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:22:14 addons-412730 kubelet[1571]: E0630 14:22:14.443108    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:22:15 addons-412730 kubelet[1571]: E0630 14:22:15.442813    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:22:25 addons-412730 kubelet[1571]: E0630 14:22:25.443555    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:22:29 addons-412730 kubelet[1571]: E0630 14:22:29.443497    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:22:37 addons-412730 kubelet[1571]: E0630 14:22:37.444909    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:22:40 addons-412730 kubelet[1571]: E0630 14:22:40.443254    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:22:46 addons-412730 kubelet[1571]: I0630 14:22:46.442834    1571 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jk4pf" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:22:51 addons-412730 kubelet[1571]: W0630 14:22:51.254156    1571 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", }. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Jun 30 14:22:51 addons-412730 kubelet[1571]: E0630 14:22:51.443057    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:22:54 addons-412730 kubelet[1571]: E0630 14:22:54.443532    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:23:05 addons-412730 kubelet[1571]: E0630 14:23:05.443216    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:23:09 addons-412730 kubelet[1571]: E0630 14:23:09.443152    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:23:11 addons-412730 kubelet[1571]: I0630 14:23:11.442414    1571 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [60e507365f1d30c7beac2979b93ea374fc72f0bcfb17244185c70d7ea0c4da2b] <==
	W0630 14:22:49.131283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:51.134600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:51.143280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:53.147129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:53.155076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:55.159575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:55.165534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:57.169791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:57.178957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:59.182324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:22:59.188215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:01.191931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:01.199883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:03.203224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:03.210242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:05.214988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:05.222942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:07.226923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:07.234563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:09.238350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:09.247077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:11.250772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:11.256335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:13.260557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:23:13.266365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-412730 -n addons-412730
helpers_test.go:261: (dbg) Run:  kubectl --context addons-412730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-412730 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-412730 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb: exit status 1 (89.849652ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-412730/192.168.39.114
	Start Time:       Mon, 30 Jun 2025 14:15:12 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tpjf9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tpjf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m2s                   default-scheduler  Successfully assigned default/nginx to addons-412730
	  Warning  Failed     8m2s                   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m7s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m7s (x5 over 8m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m7s (x4 over 7m46s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m58s (x20 over 8m1s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m43s (x21 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-412730/192.168.39.114
	Start Time:       Mon, 30 Jun 2025 14:15:06 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgbht (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-vgbht:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason              Age                    From                     Message
	  ----     ------              ----                   ----                     -------
	  Normal   Scheduled           8m8s                   default-scheduler        Successfully assigned default/task-pv-pod to addons-412730
	  Normal   Pulling             5m16s (x5 over 8m8s)   kubelet                  Pulling image "docker.io/nginx"
	  Warning  Failed              5m16s (x5 over 8m8s)   kubelet                  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed              5m16s (x5 over 8m8s)   kubelet                  Error: ErrImagePull
	  Warning  Failed              3m (x20 over 8m7s)     kubelet                  Error: ImagePullBackOff
	  Normal   BackOff             2m49s (x21 over 8m7s)  kubelet                  Back-off pulling image "docker.io/nginx"
	  Warning  FailedAttachVolume  2s                     attachdetach-controller  AttachVolume.Attach failed for volume "pvc-f9cc5716-bb8f-487f-9ca7-ed8bc01ee668" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 9ea8c774-55bc-11f0-a358-9232e811893c
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmb4n (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jmb4n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gpszb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fl6cb" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-412730 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 addons disable ingress-dns --alsologtostderr -v=1: (1.490451827s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 addons disable ingress --alsologtostderr -v=1: (7.743250553s)
--- FAIL: TestAddons/parallel/Ingress (492.40s)
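Editor's note: every pull failure in the events above carries the same Docker Hub rate-limit signature (HTTP 429, `toomanyrequests`). A small, hypothetical triage helper (the function name and heuristics are illustrative, not part of the test suite) that scans `kubectl describe` event text and classifies the failure mode:

```python
import re

# Signature Docker Hub emits when the anonymous pull quota is exhausted.
RATE_LIMIT_RE = re.compile(r"429 Too Many Requests|toomanyrequests")

def classify_pull_failures(describe_output: str) -> dict:
    """Count image-pull error events and flag Docker Hub rate-limit hits."""
    counts = {"ErrImagePull": 0, "ImagePullBackOff": 0, "rate_limited": False}
    for line in describe_output.splitlines():
        if "Error: ErrImagePull" in line:
            counts["ErrImagePull"] += 1
        if "Error: ImagePullBackOff" in line:
            counts["ImagePullBackOff"] += 1
        if RATE_LIMIT_RE.search(line):
            counts["rate_limited"] = True
    return counts
```

Run against the event blocks in this report, `rate_limited` comes back true for each failed pod, which is consistent with the timeouts being an infrastructure (registry quota) issue rather than an addon regression.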

TestAddons/parallel/CSI (379.96s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0630 14:14:56.682993 1459494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.237897ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-412730 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-412730 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c47e35d5-df9f-4a6a-a3bf-87072a4de2a0] Pending
helpers_test.go:344: "task-pv-pod" [c47e35d5-df9f-4a6a-a3bf-87072a4de2a0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-412730 -n addons-412730
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-06-30 14:21:06.291837374 +0000 UTC m=+915.974964316
addons_test.go:567: (dbg) Run:  kubectl --context addons-412730 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-412730 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-412730/192.168.39.114
Start Time:       Mon, 30 Jun 2025 14:15:06 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
  IP:  10.244.0.30
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgbht (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-vgbht:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-412730
  Normal   Pulling    3m8s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     3m8s (x5 over 6m)     kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m8s (x5 over 6m)     kubelet            Error: ErrImagePull
  Warning  Failed     52s (x20 over 5m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    41s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-412730 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-412730 logs task-pv-pod -n default: exit status 1 (75.572705ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: image can't be pulled

** /stderr **
addons_test.go:567: kubectl --context addons-412730 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
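Editor's note: the 429 responses behind this failure come from Docker Hub's anonymous pull limit. The registry advertises the quota in `ratelimit-limit` / `ratelimit-remaining` response headers whose values look like `100;w=21600` (pull count, then window in seconds). A minimal, hypothetical parser for that header format:

```python
def parse_ratelimit_header(value: str) -> tuple[int, int]:
    """Parse a Docker Hub ratelimit header value such as '100;w=21600'
    into (pull_count, window_seconds). Window defaults to 0 if absent."""
    count_part, _, window_part = value.partition(";")
    window = int(window_part.split("=", 1)[1]) if window_part.startswith("w=") else 0
    return int(count_part), window
```

Checking these headers from the CI host (via an authenticated-or-anonymous HEAD request against the registry) would confirm whether the runner's quota was exhausted before the test window opened.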
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-412730 -n addons-412730
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 logs -n 25: (1.436855881s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-083943              | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| start   | -o=json --download-only              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | -p download-only-480082              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-480082              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-083943              | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-480082              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| start   | --download-only -p                   | binary-mirror-278166 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | binary-mirror-278166                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42597               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-278166              | binary-mirror-278166 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| addons  | disable dashboard -p                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | addons-412730                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | addons-412730                        |                      |         |         |                     |                     |
	| start   | -p addons-412730 --wait=true         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:08 UTC |
	|         | --memory=4096 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=registry-creds              |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:14 UTC | 30 Jun 25 14:14 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:14 UTC | 30 Jun 25 14:14 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:14 UTC | 30 Jun 25 14:14 UTC |
	|         | -p addons-412730                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-412730 ip                     | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | configure registry-creds -f          | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | ./testdata/addons_testconfig.json    |                      |         |         |                     |                     |
	|         | -p addons-412730                     |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable registry-creds               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:20 UTC |                     |
	|         | storage-provisioner-rancher          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:06:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:06:06.240063 1460091 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:06:06.240209 1460091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:06.240221 1460091 out.go:358] Setting ErrFile to fd 2...
	I0630 14:06:06.240225 1460091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:06.240435 1460091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:06:06.241146 1460091 out.go:352] Setting JSON to false
	I0630 14:06:06.242162 1460091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49689,"bootTime":1751242677,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:06:06.242287 1460091 start.go:140] virtualization: kvm guest
	I0630 14:06:06.244153 1460091 out.go:177] * [addons-412730] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:06:06.245583 1460091 notify.go:220] Checking for updates...
	I0630 14:06:06.245617 1460091 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:06:06.246864 1460091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:06:06.248249 1460091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:06:06.249601 1460091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:06.251003 1460091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:06:06.252187 1460091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:06:06.253562 1460091 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:06:06.289858 1460091 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 14:06:06.291153 1460091 start.go:304] selected driver: kvm2
	I0630 14:06:06.291176 1460091 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:06:06.291195 1460091 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:06:06.292048 1460091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:06:06.292142 1460091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1452140/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:06:06.309060 1460091 install.go:137] /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:06:06.309119 1460091 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:06:06.309429 1460091 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:06:06.309479 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:06.309532 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:06.309546 1460091 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:06:06.309617 1460091 start.go:347] cluster config:
	{Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: Net
workPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPU
s: AutoPauseInterval:1m0s}
	I0630 14:06:06.309739 1460091 iso.go:125] acquiring lock: {Name:mk3f178100d94eda06013511859d36adab64257f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:06:06.311683 1460091 out.go:177] * Starting "addons-412730" primary control-plane node in "addons-412730" cluster
	I0630 14:06:06.313225 1460091 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime containerd
	I0630 14:06:06.313276 1460091 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4
	I0630 14:06:06.313292 1460091 cache.go:56] Caching tarball of preloaded images
	I0630 14:06:06.313420 1460091 preload.go:172] Found /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0630 14:06:06.313435 1460091 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on containerd
	I0630 14:06:06.313766 1460091 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json ...
	I0630 14:06:06.313798 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json: {Name:mk9a7a41f109a1f3b7b9e5a38a0e2a1bce3a8d97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:06.313975 1460091 start.go:360] acquireMachinesLock for addons-412730: {Name:mkb4b5035f5dd19ed6df4556a284e7c795570454 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 14:06:06.314058 1460091 start.go:364] duration metric: took 65.368µs to acquireMachinesLock for "addons-412730"
	I0630 14:06:06.314084 1460091 start.go:93] Provisioning new machine with config: &{Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0630 14:06:06.314172 1460091 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 14:06:06.316769 1460091 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0630 14:06:06.316975 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:06.317044 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:06.332767 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0630 14:06:06.333480 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:06.334061 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:06.334083 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:06.334504 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:06.334801 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:06.335019 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:06.335217 1460091 start.go:159] libmachine.API.Create for "addons-412730" (driver="kvm2")
	I0630 14:06:06.335248 1460091 client.go:168] LocalClient.Create starting
	I0630 14:06:06.335289 1460091 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem
	I0630 14:06:06.483712 1460091 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem
	I0630 14:06:06.592251 1460091 main.go:141] libmachine: Running pre-create checks...
	I0630 14:06:06.592287 1460091 main.go:141] libmachine: (addons-412730) Calling .PreCreateCheck
	I0630 14:06:06.592947 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:06.593668 1460091 main.go:141] libmachine: Creating machine...
	I0630 14:06:06.593697 1460091 main.go:141] libmachine: (addons-412730) Calling .Create
	I0630 14:06:06.594139 1460091 main.go:141] libmachine: (addons-412730) creating KVM machine...
	I0630 14:06:06.594168 1460091 main.go:141] libmachine: (addons-412730) creating network...
	I0630 14:06:06.595936 1460091 main.go:141] libmachine: (addons-412730) DBG | found existing default KVM network
	I0630 14:06:06.596779 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.596550 1460113 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020ef20}
	I0630 14:06:06.596808 1460091 main.go:141] libmachine: (addons-412730) DBG | created network xml: 
	I0630 14:06:06.596818 1460091 main.go:141] libmachine: (addons-412730) DBG | <network>
	I0630 14:06:06.596822 1460091 main.go:141] libmachine: (addons-412730) DBG |   <name>mk-addons-412730</name>
	I0630 14:06:06.596828 1460091 main.go:141] libmachine: (addons-412730) DBG |   <dns enable='no'/>
	I0630 14:06:06.596832 1460091 main.go:141] libmachine: (addons-412730) DBG |   
	I0630 14:06:06.596839 1460091 main.go:141] libmachine: (addons-412730) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 14:06:06.596851 1460091 main.go:141] libmachine: (addons-412730) DBG |     <dhcp>
	I0630 14:06:06.596865 1460091 main.go:141] libmachine: (addons-412730) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 14:06:06.596872 1460091 main.go:141] libmachine: (addons-412730) DBG |     </dhcp>
	I0630 14:06:06.596877 1460091 main.go:141] libmachine: (addons-412730) DBG |   </ip>
	I0630 14:06:06.596883 1460091 main.go:141] libmachine: (addons-412730) DBG |   
	I0630 14:06:06.596888 1460091 main.go:141] libmachine: (addons-412730) DBG | </network>
	I0630 14:06:06.596897 1460091 main.go:141] libmachine: (addons-412730) DBG | 
	I0630 14:06:06.602938 1460091 main.go:141] libmachine: (addons-412730) DBG | trying to create private KVM network mk-addons-412730 192.168.39.0/24...
	I0630 14:06:06.682845 1460091 main.go:141] libmachine: (addons-412730) DBG | private KVM network mk-addons-412730 192.168.39.0/24 created
	I0630 14:06:06.682892 1460091 main.go:141] libmachine: (addons-412730) setting up store path in /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 ...
	I0630 14:06:06.682905 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.682807 1460113 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:06.682951 1460091 main.go:141] libmachine: (addons-412730) building disk image from file:///home/jenkins/minikube-integration/20991-1452140/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:06:06.682983 1460091 main.go:141] libmachine: (addons-412730) Downloading /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1452140/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 14:06:06.983317 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.983139 1460113 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa...
	I0630 14:06:07.030013 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:07.029839 1460113 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/addons-412730.rawdisk...
	I0630 14:06:07.030043 1460091 main.go:141] libmachine: (addons-412730) DBG | Writing magic tar header
	I0630 14:06:07.030053 1460091 main.go:141] libmachine: (addons-412730) DBG | Writing SSH key tar header
	I0630 14:06:07.030061 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:07.029966 1460113 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 ...
	I0630 14:06:07.030071 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730
	I0630 14:06:07.030150 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 (perms=drwx------)
	I0630 14:06:07.030175 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines (perms=drwxr-xr-x)
	I0630 14:06:07.030186 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines
	I0630 14:06:07.030199 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube (perms=drwxr-xr-x)
	I0630 14:06:07.030230 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140 (perms=drwxrwxr-x)
	I0630 14:06:07.030243 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 14:06:07.030249 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:07.030257 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140
	I0630 14:06:07.030272 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 14:06:07.030284 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 14:06:07.030316 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins
	I0630 14:06:07.030332 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home
	I0630 14:06:07.030374 1460091 main.go:141] libmachine: (addons-412730) creating domain...
	I0630 14:06:07.030392 1460091 main.go:141] libmachine: (addons-412730) DBG | skipping /home - not owner
	I0630 14:06:07.031398 1460091 main.go:141] libmachine: (addons-412730) define libvirt domain using xml: 
	I0630 14:06:07.031420 1460091 main.go:141] libmachine: (addons-412730) <domain type='kvm'>
	I0630 14:06:07.031429 1460091 main.go:141] libmachine: (addons-412730)   <name>addons-412730</name>
	I0630 14:06:07.031435 1460091 main.go:141] libmachine: (addons-412730)   <memory unit='MiB'>4096</memory>
	I0630 14:06:07.031443 1460091 main.go:141] libmachine: (addons-412730)   <vcpu>2</vcpu>
	I0630 14:06:07.031449 1460091 main.go:141] libmachine: (addons-412730)   <features>
	I0630 14:06:07.031457 1460091 main.go:141] libmachine: (addons-412730)     <acpi/>
	I0630 14:06:07.031472 1460091 main.go:141] libmachine: (addons-412730)     <apic/>
	I0630 14:06:07.031484 1460091 main.go:141] libmachine: (addons-412730)     <pae/>
	I0630 14:06:07.031495 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.031506 1460091 main.go:141] libmachine: (addons-412730)   </features>
	I0630 14:06:07.031515 1460091 main.go:141] libmachine: (addons-412730)   <cpu mode='host-passthrough'>
	I0630 14:06:07.031524 1460091 main.go:141] libmachine: (addons-412730)   
	I0630 14:06:07.031534 1460091 main.go:141] libmachine: (addons-412730)   </cpu>
	I0630 14:06:07.031544 1460091 main.go:141] libmachine: (addons-412730)   <os>
	I0630 14:06:07.031554 1460091 main.go:141] libmachine: (addons-412730)     <type>hvm</type>
	I0630 14:06:07.031563 1460091 main.go:141] libmachine: (addons-412730)     <boot dev='cdrom'/>
	I0630 14:06:07.031572 1460091 main.go:141] libmachine: (addons-412730)     <boot dev='hd'/>
	I0630 14:06:07.031581 1460091 main.go:141] libmachine: (addons-412730)     <bootmenu enable='no'/>
	I0630 14:06:07.031597 1460091 main.go:141] libmachine: (addons-412730)   </os>
	I0630 14:06:07.031609 1460091 main.go:141] libmachine: (addons-412730)   <devices>
	I0630 14:06:07.031619 1460091 main.go:141] libmachine: (addons-412730)     <disk type='file' device='cdrom'>
	I0630 14:06:07.031636 1460091 main.go:141] libmachine: (addons-412730)       <source file='/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/boot2docker.iso'/>
	I0630 14:06:07.031647 1460091 main.go:141] libmachine: (addons-412730)       <target dev='hdc' bus='scsi'/>
	I0630 14:06:07.031659 1460091 main.go:141] libmachine: (addons-412730)       <readonly/>
	I0630 14:06:07.031667 1460091 main.go:141] libmachine: (addons-412730)     </disk>
	I0630 14:06:07.031679 1460091 main.go:141] libmachine: (addons-412730)     <disk type='file' device='disk'>
	I0630 14:06:07.031689 1460091 main.go:141] libmachine: (addons-412730)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 14:06:07.031737 1460091 main.go:141] libmachine: (addons-412730)       <source file='/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/addons-412730.rawdisk'/>
	I0630 14:06:07.031764 1460091 main.go:141] libmachine: (addons-412730)       <target dev='hda' bus='virtio'/>
	I0630 14:06:07.031774 1460091 main.go:141] libmachine: (addons-412730)     </disk>
	I0630 14:06:07.031792 1460091 main.go:141] libmachine: (addons-412730)     <interface type='network'>
	I0630 14:06:07.031805 1460091 main.go:141] libmachine: (addons-412730)       <source network='mk-addons-412730'/>
	I0630 14:06:07.031820 1460091 main.go:141] libmachine: (addons-412730)       <model type='virtio'/>
	I0630 14:06:07.031854 1460091 main.go:141] libmachine: (addons-412730)     </interface>
	I0630 14:06:07.031878 1460091 main.go:141] libmachine: (addons-412730)     <interface type='network'>
	I0630 14:06:07.031890 1460091 main.go:141] libmachine: (addons-412730)       <source network='default'/>
	I0630 14:06:07.031901 1460091 main.go:141] libmachine: (addons-412730)       <model type='virtio'/>
	I0630 14:06:07.031909 1460091 main.go:141] libmachine: (addons-412730)     </interface>
	I0630 14:06:07.031919 1460091 main.go:141] libmachine: (addons-412730)     <serial type='pty'>
	I0630 14:06:07.031927 1460091 main.go:141] libmachine: (addons-412730)       <target port='0'/>
	I0630 14:06:07.031942 1460091 main.go:141] libmachine: (addons-412730)     </serial>
	I0630 14:06:07.031951 1460091 main.go:141] libmachine: (addons-412730)     <console type='pty'>
	I0630 14:06:07.031964 1460091 main.go:141] libmachine: (addons-412730)       <target type='serial' port='0'/>
	I0630 14:06:07.031975 1460091 main.go:141] libmachine: (addons-412730)     </console>
	I0630 14:06:07.031982 1460091 main.go:141] libmachine: (addons-412730)     <rng model='virtio'>
	I0630 14:06:07.031995 1460091 main.go:141] libmachine: (addons-412730)       <backend model='random'>/dev/random</backend>
	I0630 14:06:07.032001 1460091 main.go:141] libmachine: (addons-412730)     </rng>
	I0630 14:06:07.032011 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.032016 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.032026 1460091 main.go:141] libmachine: (addons-412730)   </devices>
	I0630 14:06:07.032034 1460091 main.go:141] libmachine: (addons-412730) </domain>
	I0630 14:06:07.032066 1460091 main.go:141] libmachine: (addons-412730) 
	I0630 14:06:07.037044 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:0d:7b:07 in network default
	I0630 14:06:07.037851 1460091 main.go:141] libmachine: (addons-412730) starting domain...
	I0630 14:06:07.037899 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:07.037908 1460091 main.go:141] libmachine: (addons-412730) ensuring networks are active...
	I0630 14:06:07.038725 1460091 main.go:141] libmachine: (addons-412730) Ensuring network default is active
	I0630 14:06:07.039106 1460091 main.go:141] libmachine: (addons-412730) Ensuring network mk-addons-412730 is active
	I0630 14:06:07.039715 1460091 main.go:141] libmachine: (addons-412730) getting domain XML...
	I0630 14:06:07.040672 1460091 main.go:141] libmachine: (addons-412730) creating domain...
	I0630 14:06:08.319736 1460091 main.go:141] libmachine: (addons-412730) waiting for IP...
	I0630 14:06:08.320757 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.321298 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.321358 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.321305 1460113 retry.go:31] will retry after 217.608702ms: waiting for domain to come up
	I0630 14:06:08.541088 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.541707 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.541732 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.541668 1460113 retry.go:31] will retry after 322.22603ms: waiting for domain to come up
	I0630 14:06:08.865505 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.865965 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.865994 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.865925 1460113 retry.go:31] will retry after 339.049792ms: waiting for domain to come up
	I0630 14:06:09.206655 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:09.207155 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:09.207213 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:09.207148 1460113 retry.go:31] will retry after 478.054487ms: waiting for domain to come up
	I0630 14:06:09.686885 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:09.687397 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:09.687426 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:09.687347 1460113 retry.go:31] will retry after 663.338232ms: waiting for domain to come up
	I0630 14:06:10.352433 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:10.352917 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:10.352942 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:10.352876 1460113 retry.go:31] will retry after 824.880201ms: waiting for domain to come up
	I0630 14:06:11.179557 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:11.180050 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:11.180081 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:11.180000 1460113 retry.go:31] will retry after 1.072535099s: waiting for domain to come up
	I0630 14:06:12.253993 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:12.254526 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:12.254560 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:12.254433 1460113 retry.go:31] will retry after 1.120902402s: waiting for domain to come up
	I0630 14:06:13.376695 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:13.377283 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:13.377315 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:13.377244 1460113 retry.go:31] will retry after 1.419759095s: waiting for domain to come up
	I0630 14:06:14.799069 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:14.799546 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:14.799574 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:14.799514 1460113 retry.go:31] will retry after 1.843918596s: waiting for domain to come up
	I0630 14:06:16.645512 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:16.646025 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:16.646082 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:16.646003 1460113 retry.go:31] will retry after 2.785739179s: waiting for domain to come up
	I0630 14:06:19.434426 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:19.435055 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:19.435086 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:19.434987 1460113 retry.go:31] will retry after 2.736128675s: waiting for domain to come up
	I0630 14:06:22.172470 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:22.173071 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:22.173092 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:22.173042 1460113 retry.go:31] will retry after 3.042875133s: waiting for domain to come up
	I0630 14:06:25.219310 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:25.219910 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:25.219934 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:25.219869 1460113 retry.go:31] will retry after 4.255226103s: waiting for domain to come up
	I0630 14:06:29.478898 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.479625 1460091 main.go:141] libmachine: (addons-412730) found domain IP: 192.168.39.114
	I0630 14:06:29.479653 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has current primary IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.479661 1460091 main.go:141] libmachine: (addons-412730) reserving static IP address...
	I0630 14:06:29.480160 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find host DHCP lease matching {name: "addons-412730", mac: "52:54:00:ac:59:ff", ip: "192.168.39.114"} in network mk-addons-412730
	I0630 14:06:29.563376 1460091 main.go:141] libmachine: (addons-412730) reserved static IP address 192.168.39.114 for domain addons-412730
	I0630 14:06:29.563409 1460091 main.go:141] libmachine: (addons-412730) waiting for SSH...
	I0630 14:06:29.563418 1460091 main.go:141] libmachine: (addons-412730) DBG | Getting to WaitForSSH function...
	I0630 14:06:29.566605 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.567079 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.567114 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.567268 1460091 main.go:141] libmachine: (addons-412730) DBG | Using SSH client type: external
	I0630 14:06:29.567309 1460091 main.go:141] libmachine: (addons-412730) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa (-rw-------)
	I0630 14:06:29.567351 1460091 main.go:141] libmachine: (addons-412730) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 14:06:29.567371 1460091 main.go:141] libmachine: (addons-412730) DBG | About to run SSH command:
	I0630 14:06:29.567386 1460091 main.go:141] libmachine: (addons-412730) DBG | exit 0
	I0630 14:06:29.697378 1460091 main.go:141] libmachine: (addons-412730) DBG | SSH cmd err, output: <nil>: 
	I0630 14:06:29.697644 1460091 main.go:141] libmachine: (addons-412730) KVM machine creation complete
	I0630 14:06:29.698028 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:29.698656 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:29.698905 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:29.699080 1460091 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 14:06:29.699098 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:29.700512 1460091 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 14:06:29.700530 1460091 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 14:06:29.700538 1460091 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 14:06:29.700545 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.702878 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.703363 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.703393 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.703678 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.703917 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.704093 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.704253 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.704472 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.704757 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.704772 1460091 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 14:06:29.825352 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:06:29.825394 1460091 main.go:141] libmachine: Detecting the provisioner...
	I0630 14:06:29.825405 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.828698 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.829249 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.829291 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.829467 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.829702 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.829910 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.830086 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.830284 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.830503 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.830515 1460091 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 14:06:29.950727 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 14:06:29.950815 1460091 main.go:141] libmachine: found compatible host: buildroot
	I0630 14:06:29.950829 1460091 main.go:141] libmachine: Provisioning with buildroot...
	I0630 14:06:29.950838 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:29.951114 1460091 buildroot.go:166] provisioning hostname "addons-412730"
	I0630 14:06:29.951153 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:29.951406 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.954775 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.955251 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.955283 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.955448 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.955676 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.955864 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.956131 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.956359 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.956598 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.956616 1460091 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-412730 && echo "addons-412730" | sudo tee /etc/hostname
	I0630 14:06:30.091933 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-412730
	
	I0630 14:06:30.091974 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.095576 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.095967 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.095993 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.096193 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.096420 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.096640 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.096775 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.096955 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:30.097249 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:30.097278 1460091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-412730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-412730/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-412730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 14:06:30.228409 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:06:30.228455 1460091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1452140/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1452140/.minikube}
	I0630 14:06:30.228507 1460091 buildroot.go:174] setting up certificates
	I0630 14:06:30.228539 1460091 provision.go:84] configureAuth start
	I0630 14:06:30.228557 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:30.228999 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:30.232598 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.233018 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.233052 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.233306 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.235934 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.236310 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.236353 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.236511 1460091 provision.go:143] copyHostCerts
	I0630 14:06:30.236588 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.pem (1078 bytes)
	I0630 14:06:30.236717 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/cert.pem (1123 bytes)
	I0630 14:06:30.236771 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/key.pem (1675 bytes)
	I0630 14:06:30.236826 1460091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem org=jenkins.addons-412730 san=[127.0.0.1 192.168.39.114 addons-412730 localhost minikube]
	I0630 14:06:30.629859 1460091 provision.go:177] copyRemoteCerts
	I0630 14:06:30.629936 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 14:06:30.629965 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.633589 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.634037 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.634067 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.634292 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.634709 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.634951 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.635149 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:30.732351 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 14:06:30.765263 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 14:06:30.797980 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 14:06:30.829589 1460091 provision.go:87] duration metric: took 601.031936ms to configureAuth
	I0630 14:06:30.829626 1460091 buildroot.go:189] setting minikube options for container-runtime
	I0630 14:06:30.829835 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:30.829875 1460091 main.go:141] libmachine: Checking connection to Docker...
	I0630 14:06:30.829891 1460091 main.go:141] libmachine: (addons-412730) Calling .GetURL
	I0630 14:06:30.831493 1460091 main.go:141] libmachine: (addons-412730) DBG | using libvirt version 6000000
	I0630 14:06:30.834168 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.834575 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.834608 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.834836 1460091 main.go:141] libmachine: Docker is up and running!
	I0630 14:06:30.834858 1460091 main.go:141] libmachine: Reticulating splines...
	I0630 14:06:30.834867 1460091 client.go:171] duration metric: took 24.499610068s to LocalClient.Create
	I0630 14:06:30.834910 1460091 start.go:167] duration metric: took 24.499694666s to libmachine.API.Create "addons-412730"
	I0630 14:06:30.834925 1460091 start.go:293] postStartSetup for "addons-412730" (driver="kvm2")
	I0630 14:06:30.834938 1460091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 14:06:30.834971 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:30.835263 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 14:06:30.835291 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.837701 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.838027 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.838070 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.838230 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.838425 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.838615 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.838765 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:30.930536 1460091 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 14:06:30.935492 1460091 info.go:137] Remote host: Buildroot 2025.02
	I0630 14:06:30.935534 1460091 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1452140/.minikube/addons for local assets ...
	I0630 14:06:30.935631 1460091 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1452140/.minikube/files for local assets ...
	I0630 14:06:30.935674 1460091 start.go:296] duration metric: took 100.742963ms for postStartSetup
	I0630 14:06:30.935713 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:30.936417 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:30.939655 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.940194 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.940223 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.940486 1460091 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json ...
	I0630 14:06:30.940676 1460091 start.go:128] duration metric: took 24.626491157s to createHost
	I0630 14:06:30.940701 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.943451 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.943947 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.943979 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.944167 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.944383 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.944557 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.944780 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.944979 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:30.945339 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:30.945363 1460091 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 14:06:31.062586 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751292391.035640439
	
	I0630 14:06:31.062617 1460091 fix.go:216] guest clock: 1751292391.035640439
	I0630 14:06:31.062625 1460091 fix.go:229] Guest: 2025-06-30 14:06:31.035640439 +0000 UTC Remote: 2025-06-30 14:06:30.940689328 +0000 UTC m=+24.741258527 (delta=94.951111ms)
	I0630 14:06:31.062664 1460091 fix.go:200] guest clock delta is within tolerance: 94.951111ms
	I0630 14:06:31.062669 1460091 start.go:83] releasing machines lock for "addons-412730", held for 24.748599614s
	I0630 14:06:31.062697 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.063068 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:31.066256 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.066740 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.066774 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.067022 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.067620 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.067907 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.068104 1460091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 14:06:31.068165 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:31.068221 1460091 ssh_runner.go:195] Run: cat /version.json
	I0630 14:06:31.068250 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:31.071486 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.071690 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072008 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.072043 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072103 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.072130 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072204 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:31.072375 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:31.072476 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:31.072559 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:31.072632 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:31.072686 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:31.072859 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:31.072867 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:31.159582 1460091 ssh_runner.go:195] Run: systemctl --version
	I0630 14:06:31.186817 1460091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 14:06:31.193553 1460091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 14:06:31.193649 1460091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 14:06:31.215105 1460091 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 14:06:31.215137 1460091 start.go:495] detecting cgroup driver to use...
	I0630 14:06:31.215213 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0630 14:06:31.257543 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0630 14:06:31.273400 1460091 docker.go:230] disabling cri-docker service (if available) ...
	I0630 14:06:31.273466 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 14:06:31.289789 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 14:06:31.306138 1460091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 14:06:31.453571 1460091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 14:06:31.593173 1460091 docker.go:246] disabling docker service ...
	I0630 14:06:31.593260 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 14:06:31.610223 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 14:06:31.625803 1460091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 14:06:31.823510 1460091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 14:06:31.974811 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 14:06:31.996098 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 14:06:32.020154 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0630 14:06:32.033292 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0630 14:06:32.046251 1460091 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0630 14:06:32.046373 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0630 14:06:32.059569 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0630 14:06:32.072460 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0630 14:06:32.085242 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0630 14:06:32.098259 1460091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 14:06:32.111503 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0630 14:06:32.124063 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0630 14:06:32.136348 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0630 14:06:32.148960 1460091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 14:06:32.159881 1460091 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 14:06:32.159967 1460091 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 14:06:32.176065 1460091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 14:06:32.188348 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:32.325076 1460091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0630 14:06:32.359838 1460091 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0630 14:06:32.359979 1460091 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0630 14:06:32.366616 1460091 retry.go:31] will retry after 624.469247ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0630 14:06:32.991518 1460091 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0630 14:06:32.997598 1460091 start.go:563] Will wait 60s for crictl version
	I0630 14:06:32.997677 1460091 ssh_runner.go:195] Run: which crictl
	I0630 14:06:33.002325 1460091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 14:06:33.045054 1460091 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0630 14:06:33.045186 1460091 ssh_runner.go:195] Run: containerd --version
	I0630 14:06:33.074290 1460091 ssh_runner.go:195] Run: containerd --version
	I0630 14:06:33.134404 1460091 out.go:177] * Preparing Kubernetes v1.33.2 on containerd 1.7.23 ...
	I0630 14:06:33.198052 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:33.201668 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:33.202138 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:33.202162 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:33.202486 1460091 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 14:06:33.207929 1460091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:06:33.224479 1460091 kubeadm.go:875] updating cluster {Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 14:06:33.224651 1460091 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime containerd
	I0630 14:06:33.224723 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:33.262407 1460091 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 14:06:33.262480 1460091 ssh_runner.go:195] Run: which lz4
	I0630 14:06:33.267241 1460091 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 14:06:33.272514 1460091 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 14:06:33.272561 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (420558900 bytes)
	I0630 14:06:34.883083 1460091 containerd.go:563] duration metric: took 1.615882395s to copy over tarball
	I0630 14:06:34.883194 1460091 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 14:06:36.966670 1460091 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08344467s)
	I0630 14:06:36.966710 1460091 containerd.go:570] duration metric: took 2.083586834s to extract the tarball
	I0630 14:06:36.966722 1460091 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 14:06:37.007649 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:37.150742 1460091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0630 14:06:37.193070 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:37.245622 1460091 retry.go:31] will retry after 173.895536ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-06-30T14:06:37Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0630 14:06:37.420139 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:37.464724 1460091 containerd.go:627] all images are preloaded for containerd runtime.
	I0630 14:06:37.464758 1460091 cache_images.go:84] Images are preloaded, skipping loading
	I0630 14:06:37.464771 1460091 kubeadm.go:926] updating node { 192.168.39.114 8443 v1.33.2 containerd true true} ...
	I0630 14:06:37.464919 1460091 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-412730 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 14:06:37.465002 1460091 ssh_runner.go:195] Run: sudo crictl info
	I0630 14:06:37.511001 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:37.511034 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:37.511049 1460091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 14:06:37.511083 1460091 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-412730 NodeName:addons-412730 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 14:06:37.511271 1460091 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-412730"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
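	[editor's note: the rendered kubeadm config above stitches several API objects into one multi-document YAML file, and the pod CIDR must agree across documents: `networking.podSubnet` in ClusterConfiguration and `clusterCIDR` in KubeProxyConfiguration are both 10.244.0.0/16. A stdlib-only sketch of that cross-document check, using a naive line scanner rather than a YAML library and embedding only the fields it inspects:]

```python
# Trimmed copy of the multi-document config above; only the checked fields kept.
KUBEADM_YAML = """\
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
"""

def field(doc: str, key: str) -> str:
    """Return the value of the first `key:` line in a YAML document (naive scan)."""
    for line in doc.splitlines():
        stripped = line.strip()
        if stripped.startswith(key + ":"):
            return stripped.split(":", 1)[1].strip().strip('"')
    raise KeyError(key)

docs = KUBEADM_YAML.split("---\n")
cluster = next(d for d in docs if "kind: ClusterConfiguration" in d)
proxy = next(d for d in docs if "kind: KubeProxyConfiguration" in d)

pod_subnet = field(cluster, "podSubnet")
cluster_cidr = field(proxy, "clusterCIDR")
assert pod_subnet == cluster_cidr, (pod_subnet, cluster_cidr)
```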
	
	I0630 14:06:37.511357 1460091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 14:06:37.525652 1460091 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 14:06:37.525746 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 14:06:37.538805 1460091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0630 14:06:37.562031 1460091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 14:06:37.587566 1460091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2309 bytes)
	I0630 14:06:37.610218 1460091 ssh_runner.go:195] Run: grep 192.168.39.114	control-plane.minikube.internal$ /etc/hosts
	I0630 14:06:37.615571 1460091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:06:37.632131 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:37.779642 1460091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:06:37.816746 1460091 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730 for IP: 192.168.39.114
	I0630 14:06:37.816781 1460091 certs.go:194] generating shared ca certs ...
	I0630 14:06:37.816801 1460091 certs.go:226] acquiring lock for ca certs: {Name:mk0651a034eff71720267efe75974a64ed116095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:37.816978 1460091 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key
	I0630 14:06:38.156994 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt ...
	I0630 14:06:38.157034 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt: {Name:mkd96adf4b8dd000ef155465cd7541cb4dbc54f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.157267 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key ...
	I0630 14:06:38.157285 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key: {Name:mk6da24087206aaf4a1c31ab7ae44030109e489f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.157410 1460091 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key
	I0630 14:06:38.393807 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt ...
	I0630 14:06:38.393842 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt: {Name:mk321b6cabce084092be365d32608954916437e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.394011 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key ...
	I0630 14:06:38.394022 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key: {Name:mk82210dbfc17828b961241482db840048e12b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.394107 1460091 certs.go:256] generating profile certs ...
	I0630 14:06:38.394167 1460091 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key
	I0630 14:06:38.394181 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt with IP's: []
	I0630 14:06:39.030200 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt ...
	I0630 14:06:39.030240 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: {Name:mkc9df953aca8566f0870f2298300ff89b509f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.030418 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key ...
	I0630 14:06:39.030431 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key: {Name:mka533b0514825fa7b24c00fc43d73342f608e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.030498 1460091 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367
	I0630 14:06:39.030521 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114]
	I0630 14:06:39.110277 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 ...
	I0630 14:06:39.110319 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367: {Name:mk48ce6fc18dec0b61c5b66960071aff2a24b262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.110478 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367 ...
	I0630 14:06:39.110491 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367: {Name:mk75d3bfb9efccf05811ea90591687efdb3f8988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.110564 1460091 certs.go:381] copying /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt
	I0630 14:06:39.110641 1460091 certs.go:385] copying /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key
	I0630 14:06:39.110691 1460091 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key
	I0630 14:06:39.110708 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt with IP's: []
	I0630 14:06:39.311094 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt ...
	I0630 14:06:39.311131 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt: {Name:mkc683f67a11502b5bdeac9ab79459fda8dea4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.311302 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key ...
	I0630 14:06:39.311315 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key: {Name:mk896db09a1f34404a9d7ba2ae83a6472f785239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.311491 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 14:06:39.311529 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem (1078 bytes)
	I0630 14:06:39.311552 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem (1123 bytes)
	I0630 14:06:39.311574 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem (1675 bytes)
	I0630 14:06:39.312289 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 14:06:39.348883 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 14:06:39.387215 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 14:06:39.418089 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0630 14:06:39.456310 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 14:06:39.485942 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 14:06:39.518368 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 14:06:39.550454 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 14:06:39.582512 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 14:06:39.617828 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 14:06:39.640030 1460091 ssh_runner.go:195] Run: openssl version
	I0630 14:06:39.647364 1460091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 14:06:39.660898 1460091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.666460 1460091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.666541 1460091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.674132 1460091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 14:06:39.687542 1460091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 14:06:39.692849 1460091 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 14:06:39.692930 1460091 kubeadm.go:392] StartCluster: {Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:06:39.693042 1460091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0630 14:06:39.693124 1460091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 14:06:39.733818 1460091 cri.go:89] found id: ""
	I0630 14:06:39.733920 1460091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 14:06:39.748350 1460091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 14:06:39.762340 1460091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 14:06:39.774501 1460091 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 14:06:39.774532 1460091 kubeadm.go:157] found existing configuration files:
	
	I0630 14:06:39.774596 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 14:06:39.786405 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 14:06:39.786474 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 14:06:39.798586 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 14:06:39.809858 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 14:06:39.809932 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 14:06:39.822150 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 14:06:39.833619 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 14:06:39.833683 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 14:06:39.845682 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 14:06:39.856947 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 14:06:39.857015 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 14:06:39.870036 1460091 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 14:06:39.922555 1460091 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 14:06:39.922624 1460091 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 14:06:40.045815 1460091 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 14:06:40.045999 1460091 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 14:06:40.046138 1460091 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 14:06:40.052549 1460091 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 14:06:40.071818 1460091 out.go:235]   - Generating certificates and keys ...
	I0630 14:06:40.071955 1460091 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 14:06:40.072042 1460091 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 14:06:40.453325 1460091 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 14:06:40.505817 1460091 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 14:06:41.044548 1460091 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 14:06:41.417521 1460091 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 14:06:41.739226 1460091 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 14:06:41.739421 1460091 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-412730 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0630 14:06:41.843371 1460091 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 14:06:41.843539 1460091 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-412730 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0630 14:06:42.399109 1460091 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 14:06:42.840033 1460091 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 14:06:43.009726 1460091 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 14:06:43.009824 1460091 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 14:06:43.506160 1460091 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 14:06:43.698222 1460091 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 14:06:43.840816 1460091 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 14:06:44.231431 1460091 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 14:06:44.461049 1460091 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 14:06:44.461356 1460091 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 14:06:44.463997 1460091 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 14:06:44.465945 1460091 out.go:235]   - Booting up control plane ...
	I0630 14:06:44.466073 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 14:06:44.466167 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 14:06:44.466289 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 14:06:44.484244 1460091 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 14:06:44.494126 1460091 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 14:06:44.494220 1460091 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 14:06:44.678804 1460091 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 14:06:44.678979 1460091 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 14:06:45.689158 1460091 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.011115741s
	I0630 14:06:45.693304 1460091 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 14:06:45.693435 1460091 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.114:8443/livez
	I0630 14:06:45.694157 1460091 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 14:06:45.694324 1460091 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 14:06:48.529853 1460091 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.836599214s
	I0630 14:06:49.645556 1460091 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.952842655s
	I0630 14:06:51.692654 1460091 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.00153129s
	I0630 14:06:51.707013 1460091 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 14:06:51.730537 1460091 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 14:06:51.769844 1460091 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 14:06:51.770065 1460091 kubeadm.go:310] [mark-control-plane] Marking the node addons-412730 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 14:06:51.785586 1460091 kubeadm.go:310] [bootstrap-token] Using token: ggslqu.tjlqizciadnjmkc4
	I0630 14:06:51.787072 1460091 out.go:235]   - Configuring RBAC rules ...
	I0630 14:06:51.787249 1460091 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 14:06:51.798527 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 14:06:51.808767 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 14:06:51.813113 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 14:06:51.818246 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 14:06:51.822008 1460091 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 14:06:52.099709 1460091 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 14:06:52.594117 1460091 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 14:06:53.099418 1460091 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 14:06:53.100502 1460091 kubeadm.go:310] 
	I0630 14:06:53.100601 1460091 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 14:06:53.100613 1460091 kubeadm.go:310] 
	I0630 14:06:53.100755 1460091 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 14:06:53.100795 1460091 kubeadm.go:310] 
	I0630 14:06:53.100858 1460091 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 14:06:53.100965 1460091 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 14:06:53.101053 1460091 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 14:06:53.101065 1460091 kubeadm.go:310] 
	I0630 14:06:53.101171 1460091 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 14:06:53.101191 1460091 kubeadm.go:310] 
	I0630 14:06:53.101279 1460091 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 14:06:53.101291 1460091 kubeadm.go:310] 
	I0630 14:06:53.101389 1460091 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 14:06:53.101534 1460091 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 14:06:53.101651 1460091 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 14:06:53.101664 1460091 kubeadm.go:310] 
	I0630 14:06:53.101782 1460091 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 14:06:53.101913 1460091 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 14:06:53.101931 1460091 kubeadm.go:310] 
	I0630 14:06:53.102062 1460091 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ggslqu.tjlqizciadnjmkc4 \
	I0630 14:06:53.102204 1460091 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:617c09b4db1bc5793f47445d1f5bc6fe956626f21f2861489a8e746dc9df0278 \
	I0630 14:06:53.102237 1460091 kubeadm.go:310] 	--control-plane 
	I0630 14:06:53.102246 1460091 kubeadm.go:310] 
	I0630 14:06:53.102351 1460091 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 14:06:53.102362 1460091 kubeadm.go:310] 
	I0630 14:06:53.102448 1460091 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ggslqu.tjlqizciadnjmkc4 \
	I0630 14:06:53.102611 1460091 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:617c09b4db1bc5793f47445d1f5bc6fe956626f21f2861489a8e746dc9df0278 
	I0630 14:06:53.104820 1460091 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 14:06:53.104859 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:53.104869 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:53.106742 1460091 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 14:06:53.108147 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 14:06:53.121105 1460091 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0630 14:06:53.146410 1460091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 14:06:53.146477 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:53.146567 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-412730 minikube.k8s.io/updated_at=2025_06_30T14_06_53_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=addons-412730 minikube.k8s.io/primary=true
	I0630 14:06:53.306096 1460091 ops.go:34] apiserver oom_adj: -16
	I0630 14:06:53.306244 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:53.806580 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:54.306722 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:54.807256 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:55.306344 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:55.807179 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.306640 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.807184 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.895027 1460091 kubeadm.go:1105] duration metric: took 3.748614141s to wait for elevateKubeSystemPrivileges
	I0630 14:06:56.895079 1460091 kubeadm.go:394] duration metric: took 17.202154504s to StartCluster
	I0630 14:06:56.895108 1460091 settings.go:142] acquiring lock: {Name:mk841f56cd7a9b39ff7ba20d8e74be5d85ec1f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:56.895268 1460091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:06:56.895670 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/kubeconfig: {Name:mkaf116de3c28eb3dfd9964f3211c065b2db02a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:56.895901 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 14:06:56.895932 1460091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0630 14:06:56.895997 1460091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0630 14:06:56.896117 1460091 addons.go:69] Setting yakd=true in profile "addons-412730"
	I0630 14:06:56.896139 1460091 addons.go:238] Setting addon yakd=true in "addons-412730"
	I0630 14:06:56.896139 1460091 addons.go:69] Setting ingress=true in profile "addons-412730"
	I0630 14:06:56.896159 1460091 addons.go:238] Setting addon ingress=true in "addons-412730"
	I0630 14:06:56.896176 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896165 1460091 addons.go:69] Setting registry=true in profile "addons-412730"
	I0630 14:06:56.896200 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896203 1460091 addons.go:238] Setting addon registry=true in "addons-412730"
	I0630 14:06:56.896203 1460091 addons.go:69] Setting inspektor-gadget=true in profile "addons-412730"
	I0630 14:06:56.896223 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:56.896233 1460091 addons.go:238] Setting addon inspektor-gadget=true in "addons-412730"
	I0630 14:06:56.896223 1460091 addons.go:69] Setting metrics-server=true in profile "addons-412730"
	I0630 14:06:56.896245 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896253 1460091 addons.go:238] Setting addon metrics-server=true in "addons-412730"
	I0630 14:06:56.896265 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896276 1460091 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-412730"
	I0630 14:06:56.896285 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896287 1460091 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-412730"
	I0630 14:06:56.896305 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896570 1460091 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-412730"
	I0630 14:06:56.896661 1460091 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-412730"
	I0630 14:06:56.896723 1460091 addons.go:69] Setting volcano=true in profile "addons-412730"
	I0630 14:06:56.896778 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896785 1460091 addons.go:69] Setting registry-creds=true in profile "addons-412730"
	I0630 14:06:56.896751 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896799 1460091 addons.go:69] Setting volumesnapshots=true in profile "addons-412730"
	I0630 14:06:56.896804 1460091 addons.go:238] Setting addon registry-creds=true in "addons-412730"
	I0630 14:06:56.896811 1460091 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-412730"
	I0630 14:06:56.896816 1460091 addons.go:238] Setting addon volumesnapshots=true in "addons-412730"
	I0630 14:06:56.896825 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896830 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896835 1460091 addons.go:69] Setting cloud-spanner=true in profile "addons-412730"
	I0630 14:06:56.896838 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896836 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896852 1460091 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-412730"
	I0630 14:06:56.896876 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896897 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896918 1460091 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-412730"
	I0630 14:06:56.896941 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897097 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897165 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897187 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897280 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897295 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896826 1460091 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-412730"
	I0630 14:06:56.897181 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897361 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896845 1460091 addons.go:238] Setting addon cloud-spanner=true in "addons-412730"
	I0630 14:06:56.897199 1460091 addons.go:69] Setting storage-provisioner=true in profile "addons-412730"
	I0630 14:06:56.897456 1460091 addons.go:238] Setting addon storage-provisioner=true in "addons-412730"
	I0630 14:06:56.897488 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897499 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897606 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897861 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897876 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897886 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897898 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897978 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898012 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896791 1460091 addons.go:238] Setting addon volcano=true in "addons-412730"
	I0630 14:06:56.898062 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896771 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898162 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896767 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898520 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897212 1460091 addons.go:69] Setting default-storageclass=true in profile "addons-412730"
	I0630 14:06:56.898795 1460091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-412730"
	I0630 14:06:56.899315 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.899389 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897224 1460091 addons.go:69] Setting gcp-auth=true in profile "addons-412730"
	I0630 14:06:56.899644 1460091 mustload.go:65] Loading cluster: addons-412730
	I0630 14:06:56.897241 1460091 addons.go:69] Setting ingress-dns=true in profile "addons-412730"
	I0630 14:06:56.899700 1460091 addons.go:238] Setting addon ingress-dns=true in "addons-412730"
	I0630 14:06:56.899796 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896785 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.899911 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897328 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.899604 1460091 out.go:177] * Verifying Kubernetes components...
	I0630 14:06:56.915173 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:56.925317 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0630 14:06:56.933471 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0630 14:06:56.933567 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:56.933596 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0630 14:06:56.934049 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934108 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.934159 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934204 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.934401 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934443 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.938799 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0630 14:06:56.939041 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0630 14:06:56.939193 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0630 14:06:56.939457 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0630 14:06:56.939729 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940028 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940309 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.940326 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.940413 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940931 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941099 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.941112 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.941179 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.941232 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941301 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941738 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.941788 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.942491 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942515 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.942624 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.942661 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942683 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.942765 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.942792 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942805 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943018 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.943038 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943153 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.943163 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943215 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.943262 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.944142 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.944175 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.944193 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.944211 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.944294 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.944358 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.945770 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.945856 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.946237 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.946282 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.947082 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.947128 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.948967 1460091 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-412730"
	I0630 14:06:56.949015 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.949453 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.949501 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.962217 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.962296 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.973604 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I0630 14:06:56.974149 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.974664 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.974695 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.975099 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.975299 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.975756 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0630 14:06:56.977204 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.977635 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.977698 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.977979 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.978793 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.978814 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.979233 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.979861 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.979908 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.983635 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0630 14:06:56.984067 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0630 14:06:56.984613 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.985289 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.985309 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.985797 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.986422 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.986466 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.987326 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0630 14:06:56.987554 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I0630 14:06:56.988111 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.988781 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.988800 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.988868 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
	I0630 14:06:56.989272 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.989514 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.989982 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.990005 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.990076 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.990136 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.990167 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.990395 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.990688 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.990745 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.991420 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.992366 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.992419 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.992669 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I0630 14:06:56.993907 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.995228 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.995248 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.995880 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.997265 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.999293 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0630 14:06:56.999370 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.001508 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0630 14:06:57.002883 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0630 14:06:57.002916 1460091 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0630 14:06:57.002942 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.003610 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0630 14:06:57.005195 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0630 14:06:57.005935 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.005991 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.006255 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I0630 14:06:57.006289 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.006456 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.006802 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.007205 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0630 14:06:57.007321 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I0630 14:06:57.007438 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007452 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.007601 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007616 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.007742 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007767 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.008050 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008112 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.008285 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008301 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008675 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.008703 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.008723 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.008787 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.008808 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.009263 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.009378 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.009421 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.009781 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.010031 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.010108 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.010355 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.010373 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.010513 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.010533 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.010629 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.010969 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.010977 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.011283 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.011304 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.011392 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.011650 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.011783 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.011867 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.012379 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.012423 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.012599 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.012859 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.012877 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.013047 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.013778 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.014215 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.014495 1460091 addons.go:238] Setting addon default-storageclass=true in "addons-412730"
	I0630 14:06:57.014541 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:57.014778 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.014972 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.015012 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.015647 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.017091 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.017305 1460091 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0630 14:06:57.017315 1460091 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0630 14:06:57.019235 1460091 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0630 14:06:57.019245 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0630 14:06:57.019258 1460091 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0630 14:06:57.019263 1460091 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0630 14:06:57.019284 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.019284 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.019356 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 14:06:57.020515 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45803
	I0630 14:06:57.020579 1460091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:06:57.020596 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 14:06:57.020635 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.021372 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.021977 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.022038 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.022485 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.023104 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.023180 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.023405 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.023860 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.023897 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.025612 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.025864 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.025948 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43573
	I0630 14:06:57.026240 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.026420 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.026868 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.028570 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029396 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.029420 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029587 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.029699 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.029761 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.029777 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029959 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.030089 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.030322 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.030383 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.030669 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.031123 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.031274 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.031289 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.031683 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.037907 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.038177 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.039744 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0630 14:06:57.039978 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42319
	I0630 14:06:57.040537 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.040729 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.041308 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.041328 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.041600 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.041615 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.041928 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.042164 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.042315 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.044033 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0630 14:06:57.044725 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.045331 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.045350 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.045878 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.045938 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.046425 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0630 14:06:57.047116 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.047396 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.047496 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.048257 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.048279 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.048498 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.049312 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.049440 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.049911 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.050622 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0630 14:06:57.050709 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:06:57.051429 1460091 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0630 14:06:57.051993 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.053508 1460091 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:06:57.053531 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0630 14:06:57.053554 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.054413 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42375
	I0630 14:06:57.054437 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:06:57.054478 1460091 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.35
	I0630 14:06:57.054413 1460091 out.go:177]   - Using image docker.io/registry:3.0.0
	I0630 14:06:57.054933 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.055768 1460091 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0630 14:06:57.055790 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0630 14:06:57.055812 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.055852 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.055876 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.056303 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.056581 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0630 14:06:57.056594 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.056599 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0630 14:06:57.056622 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.057388 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
	I0630 14:06:57.058752 1460091 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:06:57.058770 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0630 14:06:57.058788 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.059503 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.060288 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.060307 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.060551 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.060762 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.060918 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.060980 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.061036 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.061516 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0630 14:06:57.062190 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.062207 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.062733 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.062771 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.062855 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.062894 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.062999 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.063152 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.063283 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.063407 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.063631 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.1
	I0630 14:06:57.063848 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.063854 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0630 14:06:57.063891 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0630 14:06:57.064349 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.064387 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.064484 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.064596 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.064660 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.064704 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.064881 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.064942 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.065098 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.065315 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.065331 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.065402 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.065624 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.066156 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.066196 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.066203 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.1
	I0630 14:06:57.066852 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.066874 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.066915 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0630 14:06:57.067252 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.067449 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.067944 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.068048 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.068097 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.068228 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.068613 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.068623 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.068822 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.068891 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.1
	I0630 14:06:57.069115 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.069121 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.069356 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.069425 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0630 14:06:57.069576 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.070270 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.070286 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.070342 1460091 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0630 14:06:57.071005 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.071129 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.071152 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.071943 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.071951 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0630 14:06:57.071970 1460091 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0630 14:06:57.071992 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.072108 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.072154 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.072685 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0630 14:06:57.072774 1460091 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0630 14:06:57.072798 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498069 bytes)
	I0630 14:06:57.072818 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.073341 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.074059 1460091 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0630 14:06:57.074063 1460091 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:06:57.074155 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0630 14:06:57.074179 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.075067 1460091 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0630 14:06:57.075229 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:06:57.075246 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0630 14:06:57.075572 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.076243 1460091 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:06:57.076303 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0630 14:06:57.076329 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.078812 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43631
	I0630 14:06:57.079025 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.079130 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.079652 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.080327 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.080351 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.080481 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.080507 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.080634 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.080858 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.081036 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.081055 1460091 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0630 14:06:57.081228 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.081763 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.082138 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.082262 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.082706 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.082752 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.083020 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.083040 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083087 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.083100 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083265 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.083494 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.083497 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.083593 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083780 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.083786 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.083977 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.084112 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.084235 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.084469 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.084506 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.084520 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.084738 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.084918 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.085065 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.085095 1460091 out.go:177]   - Using image docker.io/busybox:stable
	I0630 14:06:57.085067 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.085223 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.085318 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.085373 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.085526 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.085673 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.085865 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.086430 1460091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:06:57.086442 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0630 14:06:57.086455 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.087486 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0630 14:06:57.087965 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.088516 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.088545 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.089121 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.089329 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.089866 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.090528 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.090554 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.090740 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.090964 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.091072 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.091131 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.091254 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.092992 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0630 14:06:57.094599 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0630 14:06:57.095998 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0630 14:06:57.097039 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0630 14:06:57.098265 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0630 14:06:57.099547 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0630 14:06:57.100645 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0630 14:06:57.101875 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0630 14:06:57.103299 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0630 14:06:57.103321 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0630 14:06:57.103347 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.107000 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0630 14:06:57.107083 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.107594 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.107627 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.107650 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.107840 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.108051 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.108244 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.108441 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.108455 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.108453 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.108913 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.109191 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.111002 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.111252 1460091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 14:06:57.111268 1460091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 14:06:57.111288 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.114635 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.115172 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.115248 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.115422 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.115624 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.115796 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.115964 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	W0630 14:06:57.363795 1460091 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36374->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.363842 1460091 retry.go:31] will retry after 315.136796ms: ssh: handshake failed: read tcp 192.168.39.1:36374->192.168.39.114:22: read: connection reset by peer
	W0630 14:06:57.364018 1460091 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36380->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.364049 1460091 retry.go:31] will retry after 155.525336ms: ssh: handshake failed: read tcp 192.168.39.1:36380->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.701875 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 14:06:57.701976 1460091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:06:57.837038 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0630 14:06:57.837063 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0630 14:06:57.838628 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:06:57.843008 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0630 14:06:57.843041 1460091 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0630 14:06:57.872159 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0630 14:06:57.909976 1460091 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:06:57.910010 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0630 14:06:57.932688 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0630 14:06:57.932733 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0630 14:06:57.995639 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:06:58.066461 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0630 14:06:58.080857 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0630 14:06:58.080899 1460091 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0630 14:06:58.095890 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:06:58.137462 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:06:58.206306 1460091 node_ready.go:35] waiting up to 6m0s for node "addons-412730" to be "Ready" ...
	I0630 14:06:58.209015 1460091 node_ready.go:49] node "addons-412730" is "Ready"
	I0630 14:06:58.209060 1460091 node_ready.go:38] duration metric: took 2.705097ms for node "addons-412730" to be "Ready" ...
	I0630 14:06:58.209080 1460091 api_server.go:52] waiting for apiserver process to appear ...
	I0630 14:06:58.209140 1460091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:06:58.223118 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:06:58.377311 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:06:58.393265 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:06:58.552870 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 14:06:58.629965 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0630 14:06:58.630008 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0630 14:06:58.758806 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0630 14:06:58.758842 1460091 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0630 14:06:58.850972 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:06:58.851001 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0630 14:06:59.026553 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0630 14:06:59.026591 1460091 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0630 14:06:59.029024 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0630 14:06:59.029049 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0630 14:06:59.194467 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:06:59.225323 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0630 14:06:59.225365 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0630 14:06:59.275081 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:06:59.275114 1460091 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0630 14:06:59.277525 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:06:59.360873 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0630 14:06:59.360922 1460091 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0630 14:06:59.365441 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0630 14:06:59.365473 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0630 14:06:59.479182 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0630 14:06:59.479223 1460091 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0630 14:06:59.632112 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:06:59.730609 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0630 14:06:59.730651 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0630 14:06:59.924237 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:06:59.924273 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0630 14:06:59.952744 1460091 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:06:59.952779 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0630 14:07:00.295758 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0630 14:07:00.295801 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0630 14:07:00.609047 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:07:00.711006 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:07:01.077427 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0630 14:07:01.077478 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0630 14:07:01.488779 1460091 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.786858112s)
	I0630 14:07:01.488824 1460091 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0630 14:07:01.488851 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.650181319s)
	I0630 14:07:01.488917 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:01.488939 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:01.489367 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:01.489386 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:01.489398 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:01.489407 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:01.489675 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:01.489692 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:01.519482 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0630 14:07:01.519507 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0630 14:07:01.953943 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0630 14:07:01.953981 1460091 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0630 14:07:02.000299 1460091 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-412730" context rescaled to 1 replicas
	I0630 14:07:02.634511 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0630 14:07:02.634547 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0630 14:07:03.286523 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0630 14:07:03.286560 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0630 14:07:03.817225 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:07:03.817256 1460091 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0630 14:07:04.096118 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0630 14:07:04.096173 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:07:04.099962 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:04.100533 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:07:04.100570 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:04.100887 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:07:04.101144 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:07:04.101379 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:07:04.101559 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:07:04.500309 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:07:05.218352 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0630 14:07:05.643348 1460091 addons.go:238] Setting addon gcp-auth=true in "addons-412730"
	I0630 14:07:05.643433 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:07:05.643934 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:07:05.643986 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:07:05.660744 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0630 14:07:05.661458 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:07:05.662215 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:07:05.662238 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:07:05.662683 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:07:05.663335 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:07:05.663379 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:07:05.682214 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0630 14:07:05.683058 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:07:05.683766 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:07:05.683791 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:07:05.684301 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:07:05.684542 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:07:05.686376 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:07:05.686632 1460091 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0630 14:07:05.686663 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:07:05.690202 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:05.690836 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:07:05.690876 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:05.691075 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:07:05.691278 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:07:05.691467 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:07:05.691655 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:07:11.565837 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.693634263s)
	I0630 14:07:11.565899 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.565914 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.565980 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.570295044s)
	I0630 14:07:11.566027 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.499537s)
	I0630 14:07:11.566089 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (13.470173071s)
	I0630 14:07:11.566122 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566098 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566168 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566176 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.42868021s)
	I0630 14:07:11.566202 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566212 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566039 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566229 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566242 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566137 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566252 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566260 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566283 1460091 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (13.357116893s)
	I0630 14:07:11.566302 1460091 api_server.go:72] duration metric: took 14.670334608s to wait for apiserver process to appear ...
	I0630 14:07:11.566309 1460091 api_server.go:88] waiting for apiserver healthz status ...
	I0630 14:07:11.566329 1460091 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0630 14:07:11.566328 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (13.343175575s)
	I0630 14:07:11.566350 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566360 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566359 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (13.189016834s)
	I0630 14:07:11.566380 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566389 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566439 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566447 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566456 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566462 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566686 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.566242 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566727 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566737 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566745 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566753 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566773 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566782 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566789 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566794 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566839 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.566844 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.173547374s)
	I0630 14:07:11.566862 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566868 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566871 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566874 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566881 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566753 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567113 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567151 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567170 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567176 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567183 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.567190 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.567203 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567217 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567249 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.567258 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.567271 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567282 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567309 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567329 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567335 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567250 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567548 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567578 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567585 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567976 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.568014 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.568021 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.568825 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.568856 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.568865 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566881 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569293 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.016393005s)
	I0630 14:07:11.569320 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569328 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569412 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.374918327s)
	I0630 14:07:11.569425 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569431 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569478 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.291926439s)
	I0630 14:07:11.569490 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569497 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569593 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.937451446s)
	I0630 14:07:11.569615 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569624 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569735 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.960641721s)
	W0630 14:07:11.569757 1460091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:07:11.569775 1460091 retry.go:31] will retry after 330.589533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:07:11.569820 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.858779326s)
	I0630 14:07:11.569834 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569841 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570507 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.570534 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.570540 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.570547 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.570552 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570841 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.570867 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.570873 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.570879 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.570884 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570993 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.571027 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.571032 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.571041 1460091 addons.go:479] Verifying addon metrics-server=true in "addons-412730"
	I0630 14:07:11.571778 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.571807 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.571816 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.571823 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.571830 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.571917 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.572331 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.572343 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.572353 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.572362 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.572758 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.572789 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.572797 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.572807 1460091 addons.go:479] Verifying addon ingress=true in "addons-412730"
	I0630 14:07:11.573202 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573214 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573223 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.573229 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.573243 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573257 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573283 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573302 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573308 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573315 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.573321 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.573502 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573535 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573568 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573586 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573947 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573962 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573971 1460091 addons.go:479] Verifying addon registry=true in "addons-412730"
	I0630 14:07:11.574975 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575013 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.575195 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.575240 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575258 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.575424 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575449 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.574703 1460091 out.go:177] * Verifying ingress addon...
	I0630 14:07:11.574951 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.576902 1460091 out.go:177] * Verifying registry addon...
	I0630 14:07:11.577803 1460091 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-412730 service yakd-dashboard -n yakd-dashboard
	
	I0630 14:07:11.578734 1460091 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0630 14:07:11.579547 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0630 14:07:11.618799 1460091 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0630 14:07:11.642386 1460091 api_server.go:141] control plane version: v1.33.2
	I0630 14:07:11.642428 1460091 api_server.go:131] duration metric: took 76.109211ms to wait for apiserver health ...
	I0630 14:07:11.642442 1460091 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 14:07:11.648379 1460091 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0630 14:07:11.648411 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:11.648426 1460091 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0630 14:07:11.648448 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:11.787935 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.787961 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.788293 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.788355 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:07:11.788482 1460091 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0630 14:07:11.788776 1460091 system_pods.go:59] 17 kube-system pods found
	I0630 14:07:11.788844 1460091 system_pods.go:61] "amd-gpu-device-plugin-jk4pf" [669e6afe-7041-4750-a8b3-b9b16b2c1200] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:07:11.788873 1460091 system_pods.go:61] "coredns-674b8bbfcf-55nn4" [f9bb36d9-fcc7-40a9-a574-a0c0d4a2e249] Running
	I0630 14:07:11.788883 1460091 system_pods.go:61] "csi-hostpath-attacher-0" [b2871319-8553-4b97-acc6-9fa791a121e7] Pending
	I0630 14:07:11.788891 1460091 system_pods.go:61] "etcd-addons-412730" [0d20e35f-0200-4c76-93c7-c5dc73170568] Running
	I0630 14:07:11.788902 1460091 system_pods.go:61] "kube-apiserver-addons-412730" [f635944a-97e7-41a4-93a2-bb7fcee2b33b] Running
	I0630 14:07:11.788912 1460091 system_pods.go:61] "kube-controller-manager-addons-412730" [bc65f29f-9646-460b-bbd6-d7633581c597] Running
	I0630 14:07:11.788923 1460091 system_pods.go:61] "kube-ingress-dns-minikube" [b9186cc8-be28-421d-8259-84f8fa275c24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:07:11.788933 1460091 system_pods.go:61] "kube-proxy-mgntr" [b2ebef04-6f35-4cb1-a058-5694a72ff27d] Running
	I0630 14:07:11.788941 1460091 system_pods.go:61] "kube-scheduler-addons-412730" [8cb21dd0-89ca-47fb-99e5-03acd8d6fc0f] Running
	I0630 14:07:11.788951 1460091 system_pods.go:61] "metrics-server-7fbb699795-kjqlg" [517ec2e4-c4bc-45b6-ada2-68d1e16b2f19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:07:11.788965 1460091 system_pods.go:61] "nvidia-device-plugin-daemonset-x5r2c" [b30b72eb-28c1-4e3a-972e-9db47c66ac6f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:07:11.788979 1460091 system_pods.go:61] "registry-694bd45846-xjdfn" [2538157e-75f2-429a-9ee9-dcbb6f56a814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:07:11.788992 1460091 system_pods.go:61] "registry-creds-6b69cdcdd5-kxnxr" [5d9d53ec-f97e-4851-9025-f208d9a9e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:07:11.789005 1460091 system_pods.go:61] "registry-proxy-dzp7x" [52f4bc70-5ad7-47f4-bd99-fc5cd471afab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:07:11.789017 1460091 system_pods.go:61] "snapshot-controller-68b874b76f-pn4tl" [26ebb6e6-2f9c-47b1-a6a2-d0bc2631fc74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.789029 1460091 system_pods.go:61] "snapshot-controller-68b874b76f-v6vkl" [3e0abe0b-9975-45f8-ba9b-1b5d010607ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.789036 1460091 system_pods.go:61] "storage-provisioner" [c5a4662a-1e04-4f23-bf87-a78f5608f496] Running
	I0630 14:07:11.789049 1460091 system_pods.go:74] duration metric: took 146.59926ms to wait for pod list to return data ...
	I0630 14:07:11.789066 1460091 default_sa.go:34] waiting for default service account to be created ...
	I0630 14:07:11.852937 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.852969 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.853375 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.853431 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.853445 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.859436 1460091 default_sa.go:45] found service account: "default"
	I0630 14:07:11.859476 1460091 default_sa.go:55] duration metric: took 70.393128ms for default service account to be created ...
	I0630 14:07:11.859487 1460091 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 14:07:11.900655 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:07:11.926835 1460091 system_pods.go:86] 18 kube-system pods found
	I0630 14:07:11.926878 1460091 system_pods.go:89] "amd-gpu-device-plugin-jk4pf" [669e6afe-7041-4750-a8b3-b9b16b2c1200] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:07:11.926886 1460091 system_pods.go:89] "coredns-674b8bbfcf-55nn4" [f9bb36d9-fcc7-40a9-a574-a0c0d4a2e249] Running
	I0630 14:07:11.926914 1460091 system_pods.go:89] "csi-hostpath-attacher-0" [b2871319-8553-4b97-acc6-9fa791a121e7] Pending
	I0630 14:07:11.926919 1460091 system_pods.go:89] "csi-hostpathplugin-z9jlw" [9852b523-2f8d-4c9a-85e8-7ac58ed5eebb] Pending
	I0630 14:07:11.926925 1460091 system_pods.go:89] "etcd-addons-412730" [0d20e35f-0200-4c76-93c7-c5dc73170568] Running
	I0630 14:07:11.926931 1460091 system_pods.go:89] "kube-apiserver-addons-412730" [f635944a-97e7-41a4-93a2-bb7fcee2b33b] Running
	I0630 14:07:11.926940 1460091 system_pods.go:89] "kube-controller-manager-addons-412730" [bc65f29f-9646-460b-bbd6-d7633581c597] Running
	I0630 14:07:11.926949 1460091 system_pods.go:89] "kube-ingress-dns-minikube" [b9186cc8-be28-421d-8259-84f8fa275c24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:07:11.926958 1460091 system_pods.go:89] "kube-proxy-mgntr" [b2ebef04-6f35-4cb1-a058-5694a72ff27d] Running
	I0630 14:07:11.926966 1460091 system_pods.go:89] "kube-scheduler-addons-412730" [8cb21dd0-89ca-47fb-99e5-03acd8d6fc0f] Running
	I0630 14:07:11.926977 1460091 system_pods.go:89] "metrics-server-7fbb699795-kjqlg" [517ec2e4-c4bc-45b6-ada2-68d1e16b2f19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:07:11.926990 1460091 system_pods.go:89] "nvidia-device-plugin-daemonset-x5r2c" [b30b72eb-28c1-4e3a-972e-9db47c66ac6f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:07:11.927011 1460091 system_pods.go:89] "registry-694bd45846-xjdfn" [2538157e-75f2-429a-9ee9-dcbb6f56a814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:07:11.927030 1460091 system_pods.go:89] "registry-creds-6b69cdcdd5-kxnxr" [5d9d53ec-f97e-4851-9025-f208d9a9e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:07:11.927042 1460091 system_pods.go:89] "registry-proxy-dzp7x" [52f4bc70-5ad7-47f4-bd99-fc5cd471afab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:07:11.927050 1460091 system_pods.go:89] "snapshot-controller-68b874b76f-pn4tl" [26ebb6e6-2f9c-47b1-a6a2-d0bc2631fc74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.927061 1460091 system_pods.go:89] "snapshot-controller-68b874b76f-v6vkl" [3e0abe0b-9975-45f8-ba9b-1b5d010607ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.927074 1460091 system_pods.go:89] "storage-provisioner" [c5a4662a-1e04-4f23-bf87-a78f5608f496] Running
	I0630 14:07:11.927089 1460091 system_pods.go:126] duration metric: took 67.593682ms to wait for k8s-apps to be running ...
	I0630 14:07:11.927104 1460091 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 14:07:11.927169 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:07:12.193770 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:12.193803 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:12.354834 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.854466413s)
	I0630 14:07:12.354924 1460091 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.668263946s)
	I0630 14:07:12.354926 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:12.355156 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:12.355521 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:12.355577 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:12.355605 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:12.355625 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:12.355646 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:12.355981 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:12.356003 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:12.356015 1460091 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-412730"
	I0630 14:07:12.356885 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:07:12.357715 1460091 out.go:177] * Verifying csi-hostpath-driver addon...
	I0630 14:07:12.359034 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0630 14:07:12.359721 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0630 14:07:12.360023 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0630 14:07:12.360041 1460091 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0630 14:07:12.406216 1460091 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0630 14:07:12.406263 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:12.559364 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0630 14:07:12.559403 1460091 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0630 14:07:12.584643 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:12.585219 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:12.665811 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:07:12.665844 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0630 14:07:12.836140 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:07:12.865786 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:13.084231 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:13.084272 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:13.365331 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:13.585910 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:13.586224 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:13.635029 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.734314641s)
	I0630 14:07:13.635075 1460091 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.707884059s)
	I0630 14:07:13.635092 1460091 system_svc.go:56] duration metric: took 1.707986766s WaitForService to wait for kubelet
	I0630 14:07:13.635101 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:13.635119 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:13.635108 1460091 kubeadm.go:578] duration metric: took 16.739135366s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:07:13.635141 1460091 node_conditions.go:102] verifying NodePressure condition ...
	I0630 14:07:13.635462 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:13.635484 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:13.635497 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:13.635507 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:13.635808 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:13.635828 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:13.638761 1460091 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 14:07:13.638792 1460091 node_conditions.go:123] node cpu capacity is 2
	I0630 14:07:13.638809 1460091 node_conditions.go:105] duration metric: took 3.661934ms to run NodePressure ...
	I0630 14:07:13.638826 1460091 start.go:241] waiting for startup goroutines ...
	I0630 14:07:13.875752 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:14.024111 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.187911729s)
	I0630 14:07:14.024195 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:14.024227 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:14.024586 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:14.024683 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:14.024691 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:14.024702 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:14.024712 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:14.024994 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:14.025013 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:14.025043 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:14.026382 1460091 addons.go:479] Verifying addon gcp-auth=true in "addons-412730"
	I0630 14:07:14.029054 1460091 out.go:177] * Verifying gcp-auth addon...
	I0630 14:07:14.031483 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0630 14:07:14.064027 1460091 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0630 14:07:14.064055 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:14.100781 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:14.114141 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:14.365832 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:14.534739 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:14.583821 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:14.584016 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:14.864558 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:15.035462 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:15.083316 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:15.083872 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:15.363154 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:15.536843 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:15.584338 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:15.585465 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:15.864842 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:16.035682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:16.084017 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:16.084651 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:16.497202 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:16.537408 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:16.584546 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:16.587004 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:16.863546 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:17.035257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:17.082833 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:17.083256 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:17.367136 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:17.536257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:17.583638 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:17.584977 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:17.896589 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:18.035682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:18.083625 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:18.084228 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:18.363753 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:18.535354 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:18.583096 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:18.583122 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:18.955635 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:19.035257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:19.083049 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:19.083420 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:19.364160 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:19.536108 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:19.582458 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:19.583611 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:19.862653 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:20.034233 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:20.082846 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:20.083682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:20.364310 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:20.535698 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:20.583894 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:20.583979 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:20.863445 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:21.036429 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:21.084981 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:21.085104 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:21.363349 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:21.706174 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:21.707208 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:21.707678 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:21.865772 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:22.035893 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:22.083199 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:22.084016 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:22.364233 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:22.535367 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:22.583354 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:22.583535 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:22.865792 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:23.035789 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:23.136995 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:23.137134 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:23.363626 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:23.535937 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:23.582498 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:23.583466 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:23.864738 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:24.034476 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:24.083541 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:24.084048 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:24.364616 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:24.536239 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:24.583008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:24.583026 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:24.864935 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:25.035523 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:25.082940 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:25.083056 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:25.363774 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:25.534897 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:25.583749 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:25.583954 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:25.863865 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:26.034706 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:26.084015 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:26.084175 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:26.363040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:26.536862 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:26.583797 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:26.583943 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.189951 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:27.190109 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:27.190223 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.191199 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:27.366231 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:27.535516 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:27.584025 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.584989 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:27.864198 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:28.037431 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:28.082788 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:28.083975 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:28.363252 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:28.535710 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:28.583888 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:28.584004 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:28.864040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:29.034895 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:29.082915 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:29.083605 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:29.363381 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:29.535032 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:29.582676 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:29.583815 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:29.865439 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:30.036869 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:30.084069 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:30.084108 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:30.364800 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:30.535912 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:30.583840 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:30.585080 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:30.864767 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:31.044830 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:31.084386 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:31.084487 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:31.364893 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:31.623955 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:31.624096 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:31.625461 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:31.863871 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:32.035869 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:32.085127 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:32.086207 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:32.373662 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:32.539255 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:32.587456 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:32.588975 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:32.863384 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:33.037175 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:33.083368 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:33.086594 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:33.363683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:33.535971 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:33.582220 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:33.583079 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:33.864086 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:34.035104 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:34.087614 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:34.090507 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:34.364243 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:34.535472 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:34.582842 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:34.583065 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:34.864351 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:35.038245 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:35.083459 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:35.083968 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:35.364140 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:35.535203 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:35.583507 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:35.583504 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:35.864421 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:36.035870 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:36.082290 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:36.083322 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:36.363896 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:36.536935 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:36.592002 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:36.592024 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:36.867249 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:37.035497 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:37.082561 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:37.083545 1460091 kapi.go:107] duration metric: took 25.503987228s to wait for kubernetes.io/minikube-addons=registry ...
	I0630 14:07:37.364896 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:37.535915 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:37.582416 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:37.863882 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:38.035195 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:38.084077 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:38.363908 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:38.536012 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:38.582871 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:38.865977 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:39.036008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:39.083221 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:39.366301 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:39.537043 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:39.584445 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:39.864115 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:40.035178 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:40.082503 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:40.364953 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:40.539118 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:40.582790 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:40.920318 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:41.039974 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:41.140897 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:41.363490 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:41.536671 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:41.584110 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.151839 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:42.151893 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.151941 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:42.364151 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:42.535860 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:42.637454 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.869058 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:43.034755 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:43.083141 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:43.365516 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:43.539831 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:43.585574 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:43.867882 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:44.035437 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:44.083399 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:44.364009 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:44.534997 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:44.582616 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:44.865028 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:45.034987 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:45.083033 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:45.363797 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:45.536061 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:45.582192 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:45.863930 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:46.035610 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:46.082940 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:46.363183 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:46.536317 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:46.582800 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:46.863634 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:47.035461 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:47.082263 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:47.364204 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:47.537008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:47.638719 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:47.867382 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:48.035628 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:48.082998 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:48.363676 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:48.535845 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:48.583373 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:48.865933 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:49.035994 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:49.082615 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:49.364741 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:49.763038 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:49.763188 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:49.864019 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:50.034923 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:50.081789 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:50.363509 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:50.536302 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:50.582756 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.084972 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:51.085222 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:51.088586 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.365037 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:51.536393 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:51.583205 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.863948 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:52.036793 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:52.083280 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:52.363764 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:52.534903 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:52.582225 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:52.863489 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:53.035662 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:53.083237 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:53.363683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:53.535229 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:53.582794 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:53.864519 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:54.035606 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:54.083006 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:54.363649 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:54.534894 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:54.582432 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:54.874053 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:55.036295 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:55.138176 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:55.439408 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:55.536289 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:55.583387 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:55.877077 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:56.038681 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:56.088650 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:56.364716 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:56.537099 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:56.638302 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:56.888274 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:57.065461 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:57.082558 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:57.364271 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:57.537383 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:57.584203 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:57.864829 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:58.035093 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:58.082842 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:58.368712 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:58.536145 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:58.583188 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:58.864081 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:59.035171 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:59.082395 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:59.363881 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:59.770427 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:59.775289 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:59.886727 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:00.036389 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:00.138257 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:00.365066 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:00.543394 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:00.587828 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:00.862860 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:01.045510 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:01.084722 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:01.370626 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:01.543476 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:01.643717 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:01.863100 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:02.036395 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:02.083306 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:02.364022 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:02.536447 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:02.582849 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:02.863402 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:03.043769 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:03.084338 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:03.364984 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:03.537068 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:03.583105 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:03.873833 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:04.064570 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:04.165207 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:04.363705 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:04.534655 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:04.582773 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:04.865214 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:05.040132 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:05.082101 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:05.364071 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:05.535996 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:05.583847 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:05.864830 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:06.035167 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:06.082727 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:06.364040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:06.536325 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:06.584424 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:06.867769 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:07.035374 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:07.085873 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:07.363748 1460091 kapi.go:107] duration metric: took 55.004020875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0630 14:08:07.535663 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:07.583300 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:08.036340 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:08.083025 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:08.537501 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:08.583289 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:09.035787 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:09.083288 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:09.536861 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:09.895410 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:10.036972 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:10.103056 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:10.537875 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:10.583172 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:11.036116 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:11.082706 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:11.537110 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:11.583096 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:12.035141 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:12.083220 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:12.535683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:12.583269 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:13.035346 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:13.085856 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:13.535419 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:13.584214 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:14.035523 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:14.086182 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:14.538450 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:14.584164 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:15.035469 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:15.082710 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:15.535978 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:15.584976 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:16.035643 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:16.083354 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:16.536216 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:16.582722 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:17.036015 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:17.082827 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:17.535105 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:17.582197 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:18.036044 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:18.082594 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:18.535731 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:18.636867 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:19.040011 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:19.084634 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:19.538800 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:19.584691 1460091 kapi.go:107] duration metric: took 1m8.005950872s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0630 14:08:20.046904 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:20.544735 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:21.045744 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:21.545748 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:22.039630 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:22.538370 1460091 kapi.go:107] duration metric: took 1m8.506886725s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0630 14:08:22.539980 1460091 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-412730 cluster.
	I0630 14:08:22.541245 1460091 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0630 14:08:22.542490 1460091 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0630 14:08:22.544085 1460091 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, volcano, inspektor-gadget, registry-creds, cloud-spanner, metrics-server, ingress-dns, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0630 14:08:22.545451 1460091 addons.go:514] duration metric: took 1m25.649456906s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin volcano inspektor-gadget registry-creds cloud-spanner metrics-server ingress-dns storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0630 14:08:22.545505 1460091 start.go:246] waiting for cluster config update ...
	I0630 14:08:22.545527 1460091 start.go:255] writing updated cluster config ...
	I0630 14:08:22.545830 1460091 ssh_runner.go:195] Run: rm -f paused
	I0630 14:08:22.552874 1460091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:08:22.645593 1460091 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-55nn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.650587 1460091 pod_ready.go:94] pod "coredns-674b8bbfcf-55nn4" is "Ready"
	I0630 14:08:22.650616 1460091 pod_ready.go:86] duration metric: took 4.992795ms for pod "coredns-674b8bbfcf-55nn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.653714 1460091 pod_ready.go:83] waiting for pod "etcd-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.658042 1460091 pod_ready.go:94] pod "etcd-addons-412730" is "Ready"
	I0630 14:08:22.658066 1460091 pod_ready.go:86] duration metric: took 4.323836ms for pod "etcd-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.660310 1460091 pod_ready.go:83] waiting for pod "kube-apiserver-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.664410 1460091 pod_ready.go:94] pod "kube-apiserver-addons-412730" is "Ready"
	I0630 14:08:22.664433 1460091 pod_ready.go:86] duration metric: took 4.099276ms for pod "kube-apiserver-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.666354 1460091 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.958219 1460091 pod_ready.go:94] pod "kube-controller-manager-addons-412730" is "Ready"
	I0630 14:08:22.958253 1460091 pod_ready.go:86] duration metric: took 291.880924ms for pod "kube-controller-manager-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.158459 1460091 pod_ready.go:83] waiting for pod "kube-proxy-mgntr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.557555 1460091 pod_ready.go:94] pod "kube-proxy-mgntr" is "Ready"
	I0630 14:08:23.557587 1460091 pod_ready.go:86] duration metric: took 399.092549ms for pod "kube-proxy-mgntr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.758293 1460091 pod_ready.go:83] waiting for pod "kube-scheduler-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:24.157033 1460091 pod_ready.go:94] pod "kube-scheduler-addons-412730" is "Ready"
	I0630 14:08:24.157070 1460091 pod_ready.go:86] duration metric: took 398.746217ms for pod "kube-scheduler-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:24.157088 1460091 pod_ready.go:40] duration metric: took 1.604151264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:08:24.206500 1460091 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 14:08:24.208969 1460091 out.go:177] * Done! kubectl is now configured to use "addons-412730" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	46e9c486237cc       56cc512116c8f       6 minutes ago       Running             busybox                                  0                   5b8f43d306a71       busybox
	a41e1f5d78ba3       158e2f2d90f21       12 minutes ago      Running             controller                               0                   ad79beda1cd96       ingress-nginx-controller-67687b59dd-vvcrv
	0383a04db64b6       738351fd438f0       13 minutes ago      Running             csi-snapshotter                          0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	b2d34cd3b4b5f       931dbfd16f87c       13 minutes ago      Running             csi-provisioner                          0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	7083636dce9aa       e899260153aed       13 minutes ago      Running             liveness-probe                           0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	bfebc08e181a7       e255e073c508c       13 minutes ago      Running             hostpath                                 0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	49bfc828f9828       88ef14a257f42       13 minutes ago      Running             node-driver-registrar                    0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	02d5183cb541e       19a639eda60f0       13 minutes ago      Running             csi-resizer                              0                   1b37be17df7f2       csi-hostpath-resizer-0
	40b28663fd84f       a1ed5895ba635       13 minutes ago      Running             csi-external-health-monitor-controller   0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	b66ddaac6e88a       59cbb42146a37       13 minutes ago      Running             csi-attacher                             0                   6f9489fdc4235       csi-hostpath-attacher-0
	2c3efa502f6ac       0ea86a0862033       13 minutes ago      Exited              patch                                    0                   479724e3cf758       ingress-nginx-admission-patch-fl6cb
	dca6ca157e955       aa61ee9c70bc4       13 minutes ago      Running             volume-snapshot-controller               0                   82ccf34d900ac       snapshot-controller-68b874b76f-v6vkl
	8ff6da260516f       0ea86a0862033       13 minutes ago      Exited              create                                   0                   104d25c1177d7       ingress-nginx-admission-create-gpszb
	b61ad9d665eb6       aa61ee9c70bc4       13 minutes ago      Running             volume-snapshot-controller               0                   9aa1ac650c210       snapshot-controller-68b874b76f-pn4tl
	2618e4dc11783       30dd67412fdea       13 minutes ago      Running             minikube-ingress-dns                     0                   0fd95f2b44624       kube-ingress-dns-minikube
	811184505fb18       d5e667c0f2bb6       13 minutes ago      Running             amd-gpu-device-plugin                    0                   b44acdeabc7e9       amd-gpu-device-plugin-jk4pf
	60e507365f1d3       6e38f40d628db       14 minutes ago      Running             storage-provisioner                      0                   c81c97cad8c5e       storage-provisioner
	8e1e019f61b20       1cf5f116067c6       14 minutes ago      Running             coredns                                  0                   f0e3a5c4dc1ba       coredns-674b8bbfcf-55nn4
	e9d272ef95cc8       661d404f36f01       14 minutes ago      Running             kube-proxy                               0                   ec083bc9ceaf6       kube-proxy-mgntr
	cda40c61e5780       cfed1ff748928       14 minutes ago      Running             kube-scheduler                           0                   8b62447a9ffbc       kube-scheduler-addons-412730
	0f5bd8617276d       ee794efa53d85       14 minutes ago      Running             kube-apiserver                           0                   296d470d26007       kube-apiserver-addons-412730
	ed722ba732c02       ff4f56c76b82d       14 minutes ago      Running             kube-controller-manager                  0                   6de0b1c4abb94       kube-controller-manager-addons-412730
	0aa8fdef51063       499038711c081       14 minutes ago      Running             etcd                                     0                   2ea511d5408a9       etcd-addons-412730
	
	
	==> containerd <==
	Jun 30 14:20:54 addons-412730 containerd[860]: time="2025-06-30T14:20:54.125599602Z" level=info msg="RemovePodSandbox for \"ae20e9cc5d702cca611c6d794412460e8dc6f4dc7453ff5059d03566bf754215\""
	Jun 30 14:20:54 addons-412730 containerd[860]: time="2025-06-30T14:20:54.125819490Z" level=info msg="Forcibly stopping sandbox \"ae20e9cc5d702cca611c6d794412460e8dc6f4dc7453ff5059d03566bf754215\""
	Jun 30 14:20:54 addons-412730 containerd[860]: time="2025-06-30T14:20:54.151082432Z" level=info msg="TearDown network for sandbox \"ae20e9cc5d702cca611c6d794412460e8dc6f4dc7453ff5059d03566bf754215\" successfully"
	Jun 30 14:20:54 addons-412730 containerd[860]: time="2025-06-30T14:20:54.160055192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae20e9cc5d702cca611c6d794412460e8dc6f4dc7453ff5059d03566bf754215\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jun 30 14:20:54 addons-412730 containerd[860]: time="2025-06-30T14:20:54.160272544Z" level=info msg="RemovePodSandbox \"ae20e9cc5d702cca611c6d794412460e8dc6f4dc7453ff5059d03566bf754215\" returns successfully"
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.214770274Z" level=info msg="Kill container \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\""
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.263131744Z" level=info msg="shim disconnected" id=9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29 namespace=k8s.io
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.263252197Z" level=warning msg="cleaning up after shim disconnected" id=9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29 namespace=k8s.io
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.263297210Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.287980181Z" level=info msg="StopContainer for \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\" returns successfully"
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.288626393Z" level=info msg="StopPodSandbox for \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\""
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.288688900Z" level=info msg="Container to stop \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.331417541Z" level=info msg="shim disconnected" id=115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f namespace=k8s.io
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.331878300Z" level=warning msg="cleaning up after shim disconnected" id=115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f namespace=k8s.io
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.331891624Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.416693538Z" level=info msg="TearDown network for sandbox \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\" successfully"
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.416781510Z" level=info msg="StopPodSandbox for \"115dda0086b6d40fc45e868b144bff58d6c53f428dcf4b5330e55951b2e5ff8f\" returns successfully"
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.734683913Z" level=info msg="RemoveContainer for \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\""
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.742530224Z" level=info msg="RemoveContainer for \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\" returns successfully"
	Jun 30 14:20:55 addons-412730 containerd[860]: time="2025-06-30T14:20:55.743908788Z" level=error msg="ContainerStatus for \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\": not found"
	Jun 30 14:20:58 addons-412730 containerd[860]: time="2025-06-30T14:20:58.444742682Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Jun 30 14:20:58 addons-412730 containerd[860]: time="2025-06-30T14:20:58.448584008Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:20:58 addons-412730 containerd[860]: time="2025-06-30T14:20:58.535989317Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:20:58 addons-412730 containerd[860]: time="2025-06-30T14:20:58.660574856Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Jun 30 14:20:58 addons-412730 containerd[860]: time="2025-06-30T14:20:58.660700669Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	
	
	==> coredns [8e1e019f61b2004e8815ddbaf9eb6f733467fc8a79bd77196bc0c76b85b8b99c] <==
	[INFO] 10.244.0.7:37816 - 48483 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00020548s
	[INFO] 10.244.0.7:37816 - 18283 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000160064s
	[INFO] 10.244.0.7:37816 - 57759 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000505163s
	[INFO] 10.244.0.7:37816 - 2367 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000121216s
	[INFO] 10.244.0.7:37816 - 32941 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000407687s
	[INFO] 10.244.0.7:37816 - 38124 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00021235s
	[INFO] 10.244.0.7:37816 - 42370 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000448784s
	[INFO] 10.244.0.7:49788 - 53103 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191609s
	[INFO] 10.244.0.7:49788 - 52743 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161724s
	[INFO] 10.244.0.7:59007 - 35302 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000389724s
	[INFO] 10.244.0.7:59007 - 35035 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000520532s
	[INFO] 10.244.0.7:46728 - 65447 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133644s
	[INFO] 10.244.0.7:46728 - 65148 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00061652s
	[INFO] 10.244.0.7:50533 - 14727 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000567642s
	[INFO] 10.244.0.7:50533 - 14481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000783618s
	[INFO] 10.244.0.27:51053 - 48711 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000523898s
	[INFO] 10.244.0.27:40917 - 60785 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000642215s
	[INFO] 10.244.0.27:35189 - 63805 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096026s
	[INFO] 10.244.0.27:43478 - 6990 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00040325s
	[INFO] 10.244.0.27:53994 - 15788 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170635s
	[INFO] 10.244.0.27:51155 - 39553 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128149s
	[INFO] 10.244.0.27:37346 - 35756 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001274741s
	[INFO] 10.244.0.27:38294 - 56651 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000805113s
	[INFO] 10.244.0.31:54260 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000711267s
	[INFO] 10.244.0.31:46467 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124471s
	
	
	==> describe nodes <==
	Name:               addons-412730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-412730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=addons-412730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_06_53_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-412730
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-412730"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:06:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-412730
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:20:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:20:29 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:20:29 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:20:29 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:20:29 +0000   Mon, 30 Jun 2025 14:06:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    addons-412730
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc9448cb8b5448fc9151301fb29bc0cd
	  System UUID:                bc9448cb-8b54-48fc-9151-301fb29bc0cd
	  Boot ID:                    6141a1b2-f9ea-4f8f-bc9e-ef270348f968
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-controller-67687b59dd-vvcrv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         13m
	  kube-system                 amd-gpu-device-plugin-jk4pf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-674b8bbfcf-55nn4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-z9jlw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-412730                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-412730                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-412730        200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-mgntr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-412730                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-68b874b76f-pn4tl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-68b874b76f-v6vkl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node addons-412730 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node addons-412730 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node addons-412730 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node addons-412730 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node addons-412730 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node addons-412730 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m                kubelet          Node addons-412730 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node addons-412730 event: Registered Node addons-412730 in Controller
	
	
	==> dmesg <==
	[  +4.862777] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.721987] kauditd_printk_skb: 3 callbacks suppressed
	[  +3.179109] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.932449] kauditd_printk_skb: 47 callbacks suppressed
	[  +4.007047] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.735579] kauditd_printk_skb: 26 callbacks suppressed
	[Jun30 14:08] kauditd_printk_skb: 76 callbacks suppressed
	[  +4.704545] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.836614] kauditd_printk_skb: 61 callbacks suppressed
	[Jun30 14:09] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:10] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:13] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:14] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.000048] kauditd_printk_skb: 19 callbacks suppressed
	[ +11.983780] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.925929] kauditd_printk_skb: 2 callbacks suppressed
	[Jun30 14:15] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.009854] kauditd_printk_skb: 28 callbacks suppressed
	[  +1.375797] kauditd_printk_skb: 61 callbacks suppressed
	[  +3.058612] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.836555] kauditd_printk_skb: 9 callbacks suppressed
	[Jun30 14:17] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 14:19] kauditd_printk_skb: 2 callbacks suppressed
	[Jun30 14:20] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [0aa8fdef5106381a33bf7fae10904caa793ace481cae1d43127914ffe86d49ff] <==
	{"level":"warn","ts":"2025-06-30T14:07:49.751637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.210142ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:49.751838Z","caller":"traceutil/trace.go:171","msg":"trace[1184992035] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1203; }","duration":"187.410383ms","start":"2025-06-30T14:07:49.564417Z","end":"2025-06-30T14:07:49.751827Z","steps":["trace[1184992035] 'agreement among raft nodes before linearized reading'  (duration: 187.200791ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:49.752758Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.403506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:49.751590Z","caller":"traceutil/trace.go:171","msg":"trace[559772973] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"267.154952ms","start":"2025-06-30T14:07:49.483661Z","end":"2025-06-30T14:07:49.750816Z","steps":["trace[559772973] 'process raft request'  (duration: 266.932951ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:07:49.752866Z","caller":"traceutil/trace.go:171","msg":"trace[154741241] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1203; }","duration":"176.571713ms","start":"2025-06-30T14:07:49.576287Z","end":"2025-06-30T14:07:49.752858Z","steps":["trace[154741241] 'agreement among raft nodes before linearized reading'  (duration: 176.438082ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.060101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.201972ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156627244712664246 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/snapshot-controller-68b874b76f-v6vkl.184dd73930f85720\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/snapshot-controller-68b874b76f-v6vkl.184dd73930f85720\" value_size:707 lease:3156627244712664233 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-06-30T14:07:51.060508Z","caller":"traceutil/trace.go:171","msg":"trace[1403560008] linearizableReadLoop","detail":"{readStateIndex:1246; appliedIndex:1245; }","duration":"269.602891ms","start":"2025-06-30T14:07:50.790891Z","end":"2025-06-30T14:07:51.060494Z","steps":["trace[1403560008] 'read index received'  (duration: 53.900301ms)","trace[1403560008] 'applied index is now lower than readState.Index'  (duration: 215.701517ms)"],"step_count":2}
	{"level":"info","ts":"2025-06-30T14:07:51.060687Z","caller":"traceutil/trace.go:171","msg":"trace[1928328932] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"282.940847ms","start":"2025-06-30T14:07:50.777737Z","end":"2025-06-30T14:07:51.060678Z","steps":["trace[1928328932] 'process raft request'  (duration: 67.101901ms)","trace[1928328932] 'compare'  (duration: 214.876695ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T14:07:51.060917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.674634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:51.060970Z","caller":"traceutil/trace.go:171","msg":"trace[1908369901] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1214; }","duration":"254.762861ms","start":"2025-06-30T14:07:50.806198Z","end":"2025-06-30T14:07:51.060961Z","steps":["trace[1908369901] 'agreement among raft nodes before linearized reading'  (duration: 254.494296ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.061332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.462832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-gpszb\" limit:1 ","response":"range_response_count:1 size:4215"}
	{"level":"info","ts":"2025-06-30T14:07:51.061377Z","caller":"traceutil/trace.go:171","msg":"trace[1518962383] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-create-gpszb; range_end:; response_count:1; response_revision:1214; }","duration":"270.575777ms","start":"2025-06-30T14:07:50.790792Z","end":"2025-06-30T14:07:51.061368Z","steps":["trace[1518962383] 'agreement among raft nodes before linearized reading'  (duration: 270.487611ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.061955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.960425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:51.062418Z","caller":"traceutil/trace.go:171","msg":"trace[621823114] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1214; }","duration":"205.559852ms","start":"2025-06-30T14:07:50.856769Z","end":"2025-06-30T14:07:51.062329Z","steps":["trace[621823114] 'agreement among raft nodes before linearized reading'  (duration: 204.992694ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:55.431218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.529916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:55.431286Z","caller":"traceutil/trace.go:171","msg":"trace[1840291804] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1254; }","duration":"185.638229ms","start":"2025-06-30T14:07:55.245637Z","end":"2025-06-30T14:07:55.431275Z","steps":["trace[1840291804] 'count revisions from in-memory index tree'  (duration: 185.483282ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.760814Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.563816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.761810Z","caller":"traceutil/trace.go:171","msg":"trace[1037456471] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1289; }","duration":"232.616347ms","start":"2025-06-30T14:07:59.529177Z","end":"2025-06-30T14:07:59.761793Z","steps":["trace[1037456471] 'range keys from in-memory index tree'  (duration: 231.18055ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.762324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.982539ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.762383Z","caller":"traceutil/trace.go:171","msg":"trace[856262130] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1289; }","duration":"197.052432ms","start":"2025-06-30T14:07:59.565321Z","end":"2025-06-30T14:07:59.762373Z","steps":["trace[856262130] 'range keys from in-memory index tree'  (duration: 196.924905ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.767749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.524873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.767792Z","caller":"traceutil/trace.go:171","msg":"trace[2033650698] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1289; }","duration":"189.645425ms","start":"2025-06-30T14:07:59.578136Z","end":"2025-06-30T14:07:59.767782Z","steps":["trace[2033650698] 'range keys from in-memory index tree'  (duration: 183.005147ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:16:47.709200Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1900}
	{"level":"info","ts":"2025-06-30T14:16:47.874708Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1900,"took":"164.330155ms","hash":2534900505,"current-db-size-bytes":12238848,"current-db-size":"12 MB","current-db-size-in-use-bytes":7974912,"current-db-size-in-use":"8.0 MB"}
	{"level":"info","ts":"2025-06-30T14:16:47.875273Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2534900505,"revision":1900,"compact-revision":-1}
	
	
	==> kernel <==
	 14:21:07 up 14 min,  0 users,  load average: 0.28, 0.35, 0.39
	Linux addons-412730 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0f5bd8617276d56b4d1c938db3290f5057a6076ca2a1ff6b72007428d9958a0f] <==
	I0630 14:14:29.388938       1 handler.go:288] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0630 14:14:29.869256       1 cacher.go:183] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0630 14:14:30.002718       1 cacher.go:183] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0630 14:14:30.088081       1 cacher.go:183] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0630 14:14:30.129186       1 cacher.go:183] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0630 14:14:30.136170       1 cacher.go:183] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0630 14:14:30.389854       1 cacher.go:183] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0630 14:14:30.736396       1 cacher.go:183] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0630 14:14:48.025746       1 conn.go:339] Error on socket receive: read tcp 192.168.39.114:8443->192.168.39.1:41032: use of closed network connection
	E0630 14:14:48.212301       1 conn.go:339] Error on socket receive: read tcp 192.168.39.114:8443->192.168.39.1:41066: use of closed network connection
	I0630 14:14:51.319634       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:14:57.554271       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.103.203"}
	I0630 14:14:57.570112       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:03.599033       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:08.183782       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:11.441632       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:11.868485       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0630 14:15:12.083379       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.15.45"}
	I0630 14:15:12.087255       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:16.776061       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:19.939310       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0630 14:15:20.985204       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0630 14:15:31.545392       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:42.030628       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0630 14:16:49.559945       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ed722ba732c0211e772331fd643a8e48e5ef0b8cd4b82f97d3a5d69b9aa30756] <==
	E0630 14:19:28.405122       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:28.635725       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:30.008801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:33.257901       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:46.062658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:46.090129       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:59.503204       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:03.629652       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:03.979647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:04.326396       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:16.048961       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:17.795731       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:24.982250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:26.506509       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:20:31.082605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:36.666110       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:41.507302       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:20:42.047027       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:49.561098       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:51.690627       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:53.933767       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:56.208251       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:56.322223       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:56.508756       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0630 14:21:05.416574       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [e9d272ef95cc8f73e12d5cc59f4966731013d924126fc8eb0bd96e6acc623f27] <==
	E0630 14:06:58.349607       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:06:58.396678       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	E0630 14:06:58.396782       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:06:58.682235       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:06:58.682289       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:06:58.682317       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:06:58.729336       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:06:58.729702       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:06:58.729714       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:06:58.747265       1 config.go:199] "Starting service config controller"
	I0630 14:06:58.747303       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:06:58.747324       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:06:58.747328       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:06:58.747339       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:06:58.747342       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:06:58.747357       1 config.go:329] "Starting node config controller"
	I0630 14:06:58.747360       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:06:58.847644       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0630 14:06:58.847708       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:06:58.847734       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:06:58.848003       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cda40c61e5780477d5a234f04d425f2347a784973443632c68938aea16f474e6] <==
	E0630 14:06:49.633867       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:06:49.633920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:06:49.634247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:06:49.636896       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:06:49.637563       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:06:49.637783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:06:49.638039       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:06:49.638190       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:06:49.638365       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:06:49.638496       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:06:49.638609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:06:49.638719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:06:49.638999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:06:50.551259       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:06:50.618504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:06:50.628999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0630 14:06:50.679571       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:06:50.702747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:06:50.708224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:06:50.796622       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:06:50.797647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:06:50.806980       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:06:50.808489       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:06:50.967143       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0630 14:06:53.415169       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 30 14:20:31 addons-412730 kubelet[1571]: E0630 14:20:31.443926    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:20:32 addons-412730 kubelet[1571]: I0630 14:20:32.445979    1571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df93f77-f330-4c94-9458-069c8cba79a5" path="/var/lib/kubelet/pods/2df93f77-f330-4c94-9458-069c8cba79a5/volumes"
	Jun 30 14:20:36 addons-412730 kubelet[1571]: E0630 14:20:36.445295    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:20:45 addons-412730 kubelet[1571]: I0630 14:20:45.442381    1571 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jun 30 14:20:45 addons-412730 kubelet[1571]: E0630 14:20:45.444095    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:20:47 addons-412730 kubelet[1571]: E0630 14:20:47.661539    1571 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:20:47 addons-412730 kubelet[1571]: E0630 14:20:47.661597    1571 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Jun 30 14:20:47 addons-412730 kubelet[1571]: E0630 14:20:47.661777    1571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgbht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(c47e35d5-df9f-4a6a-a3bf-87072a4de2a0): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:20:47 addons-412730 kubelet[1571]: E0630 14:20:47.663212    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:20:55 addons-412730 kubelet[1571]: I0630 14:20:55.583623    1571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh97q\" (UniqueName: \"kubernetes.io/projected/8c1e3838-f22c-49f0-80a2-7d48bb50fbab-kube-api-access-gh97q\") pod \"8c1e3838-f22c-49f0-80a2-7d48bb50fbab\" (UID: \"8c1e3838-f22c-49f0-80a2-7d48bb50fbab\") "
	Jun 30 14:20:55 addons-412730 kubelet[1571]: I0630 14:20:55.583687    1571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c1e3838-f22c-49f0-80a2-7d48bb50fbab-config-volume\") pod \"8c1e3838-f22c-49f0-80a2-7d48bb50fbab\" (UID: \"8c1e3838-f22c-49f0-80a2-7d48bb50fbab\") "
	Jun 30 14:20:55 addons-412730 kubelet[1571]: I0630 14:20:55.584225    1571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1e3838-f22c-49f0-80a2-7d48bb50fbab-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c1e3838-f22c-49f0-80a2-7d48bb50fbab" (UID: "8c1e3838-f22c-49f0-80a2-7d48bb50fbab"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Jun 30 14:20:55 addons-412730 kubelet[1571]: I0630 14:20:55.589284    1571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1e3838-f22c-49f0-80a2-7d48bb50fbab-kube-api-access-gh97q" (OuterVolumeSpecName: "kube-api-access-gh97q") pod "8c1e3838-f22c-49f0-80a2-7d48bb50fbab" (UID: "8c1e3838-f22c-49f0-80a2-7d48bb50fbab"). InnerVolumeSpecName "kube-api-access-gh97q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Jun 30 14:20:55 addons-412730 kubelet[1571]: I0630 14:20:55.684210    1571 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c1e3838-f22c-49f0-80a2-7d48bb50fbab-config-volume\") on node \"addons-412730\" DevicePath \"\""
	Jun 30 14:20:55 addons-412730 kubelet[1571]: I0630 14:20:55.684350    1571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gh97q\" (UniqueName: \"kubernetes.io/projected/8c1e3838-f22c-49f0-80a2-7d48bb50fbab-kube-api-access-gh97q\") on node \"addons-412730\" DevicePath \"\""
	Jun 30 14:20:55 addons-412730 kubelet[1571]: I0630 14:20:55.733244    1571 scope.go:117] "RemoveContainer" containerID="9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29"
	Jun 30 14:20:55 addons-412730 kubelet[1571]: I0630 14:20:55.743600    1571 scope.go:117] "RemoveContainer" containerID="9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29"
	Jun 30 14:20:55 addons-412730 kubelet[1571]: E0630 14:20:55.744607    1571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\": not found" containerID="9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29"
	Jun 30 14:20:55 addons-412730 kubelet[1571]: I0630 14:20:55.744670    1571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29"} err="failed to get container status \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d1dce2bd3c5f36dccfa7534cc72e3b480f6439c231c579bf36d9953c3421b29\": not found"
	Jun 30 14:20:56 addons-412730 kubelet[1571]: I0630 14:20:56.444895    1571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1e3838-f22c-49f0-80a2-7d48bb50fbab" path="/var/lib/kubelet/pods/8c1e3838-f22c-49f0-80a2-7d48bb50fbab/volumes"
	Jun 30 14:20:58 addons-412730 kubelet[1571]: E0630 14:20:58.661032    1571 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Jun 30 14:20:58 addons-412730 kubelet[1571]: E0630 14:20:58.661216    1571 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Jun 30 14:20:58 addons-412730 kubelet[1571]: E0630 14:20:58.662586    1571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tpjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(64454ac4-31e6-4e37-95db-f9dbfdbc92c3): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:20:58 addons-412730 kubelet[1571]: E0630 14:20:58.663954    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:20:59 addons-412730 kubelet[1571]: E0630 14:20:59.443099    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	
	
	==> storage-provisioner [60e507365f1d30c7beac2979b93ea374fc72f0bcfb17244185c70d7ea0c4da2b] <==
	W0630 14:20:42.392101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:44.395669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:44.403363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:46.406814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:46.411990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:48.414914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:48.420073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:50.423061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:50.428903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:52.432071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:52.437755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:54.442554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:54.451616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:56.455610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:56.461611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:58.466368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:58.473317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:21:00.477593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:21:00.486560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:21:02.489771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:21:02.495491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:21:04.499068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:21:04.507775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:21:06.512000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:21:06.520868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-412730 -n addons-412730
helpers_test.go:261: (dbg) Run:  kubectl --context addons-412730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-412730 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-412730 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb: exit status 1 (87.572693ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-412730/192.168.39.114
	Start Time:       Mon, 30 Jun 2025 14:15:12 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tpjf9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tpjf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m56s                 default-scheduler  Successfully assigned default/nginx to addons-412730
	  Warning  Failed     5m56s                 kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m1s (x5 over 5m56s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m1s (x5 over 5m56s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m1s (x4 over 5m40s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     52s (x20 over 5m55s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    37s (x21 over 5m55s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-412730/192.168.39.114
	Start Time:       Mon, 30 Jun 2025 14:15:06 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgbht (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-vgbht:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-412730
	  Normal   Pulling    3m10s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m10s (x5 over 6m2s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m10s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Warning  Failed     54s (x20 over 6m1s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    43s (x21 over 6m1s)   kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmb4n (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jmb4n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gpszb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fl6cb" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-412730 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.929331477s)
--- FAIL: TestAddons/parallel/CSI (379.96s)
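The failures above all trace back to Docker Hub's unauthenticated pull rate limit (the repeated `429 Too Many Requests` / `toomanyrequests` responses from registry-1.docker.io). Docker Hub reports quota in `ratelimit-limit` and `ratelimit-remaining` response headers, whose values use the form `100;w=21600` (a pull count per window, with the window length in seconds). As a minimal sketch, assuming that documented header format (the helper name is illustrative, not part of any Docker tooling):

```python
def parse_ratelimit(header: str) -> tuple[int, int]:
    # Split a value like "100;w=21600" into (pull limit, window in seconds).
    value, _, window = header.partition(";w=")
    return int(value), int(window)

limit, window = parse_ratelimit("100;w=21600")
print(f"{limit} pulls per {window // 3600}h window")  # → 100 pulls per 6h window
```

In a CI setting like this one, authenticating the pull (or mirroring the image) raises or removes the limit, which is usually the practical fix for these flakes.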
TestAddons/parallel/LocalPath (345.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-412730 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-412730 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412730 get pvc test-pvc -o jsonpath={.status.phase} -n default
... (the line above repeated 177 more times while polling the PVC phase, until the context deadline was reached) ...
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-412730 -n addons-412730
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 logs -n 25: (1.389559262s)
helpers_test.go:252: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-083943              | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| start   | -o=json --download-only              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | -p download-only-480082              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-480082              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-083943              | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-480082              | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| start   | --download-only -p                   | binary-mirror-278166 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | binary-mirror-278166                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42597               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-278166              | binary-mirror-278166 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| addons  | disable dashboard -p                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | addons-412730                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | addons-412730                        |                      |         |         |                     |                     |
	| start   | -p addons-412730 --wait=true         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:08 UTC |
	|         | --memory=4096 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=registry-creds              |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:14 UTC | 30 Jun 25 14:14 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:14 UTC | 30 Jun 25 14:14 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:14 UTC | 30 Jun 25 14:14 UTC |
	|         | -p addons-412730                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-412730 ip                     | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-412730 addons disable         | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | configure registry-creds -f          | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | ./testdata/addons_testconfig.json    |                      |         |         |                     |                     |
	|         | -p addons-412730                     |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable registry-creds               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-412730 addons                 | addons-412730        | jenkins | v1.36.0 | 30 Jun 25 14:15 UTC | 30 Jun 25 14:15 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:06:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:06:06.240063 1460091 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:06:06.240209 1460091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:06.240221 1460091 out.go:358] Setting ErrFile to fd 2...
	I0630 14:06:06.240225 1460091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:06.240435 1460091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:06:06.241146 1460091 out.go:352] Setting JSON to false
	I0630 14:06:06.242162 1460091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49689,"bootTime":1751242677,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:06:06.242287 1460091 start.go:140] virtualization: kvm guest
	I0630 14:06:06.244153 1460091 out.go:177] * [addons-412730] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:06:06.245583 1460091 notify.go:220] Checking for updates...
	I0630 14:06:06.245617 1460091 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:06:06.246864 1460091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:06:06.248249 1460091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:06:06.249601 1460091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:06.251003 1460091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:06:06.252187 1460091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:06:06.253562 1460091 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:06:06.289858 1460091 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 14:06:06.291153 1460091 start.go:304] selected driver: kvm2
	I0630 14:06:06.291176 1460091 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:06:06.291195 1460091 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:06:06.292048 1460091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:06:06.292142 1460091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1452140/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0630 14:06:06.309060 1460091 install.go:137] /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0630 14:06:06.309119 1460091 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:06:06.309429 1460091 start_flags.go:990] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:06:06.309479 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:06.309532 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:06.309546 1460091 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:06:06.309617 1460091 start.go:347] cluster config:
	{Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:06:06.309739 1460091 iso.go:125] acquiring lock: {Name:mk3f178100d94eda06013511859d36adab64257f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:06:06.311683 1460091 out.go:177] * Starting "addons-412730" primary control-plane node in "addons-412730" cluster
	I0630 14:06:06.313225 1460091 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime containerd
	I0630 14:06:06.313276 1460091 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4
	I0630 14:06:06.313292 1460091 cache.go:56] Caching tarball of preloaded images
	I0630 14:06:06.313420 1460091 preload.go:172] Found /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0630 14:06:06.313435 1460091 cache.go:59] Finished verifying existence of preloaded tar for v1.33.2 on containerd
	I0630 14:06:06.313766 1460091 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json ...
	I0630 14:06:06.313798 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json: {Name:mk9a7a41f109a1f3b7b9e5a38a0e2a1bce3a8d97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:06.313975 1460091 start.go:360] acquireMachinesLock for addons-412730: {Name:mkb4b5035f5dd19ed6df4556a284e7c795570454 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0630 14:06:06.314058 1460091 start.go:364] duration metric: took 65.368µs to acquireMachinesLock for "addons-412730"
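
	(Editor's note: the `acquireMachinesLock` step above guards machine creation across concurrent minikube processes. A minimal sketch of that idea, using exclusive file creation as the lock primitive, is shown below; the `acquire` helper and lock-file name are illustrative stand-ins, not minikube's actual lock package, which also handles delays and timeouts as the log's `{... Delay:500ms Timeout:13m0s ...}` fields suggest.)

	```go
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// acquire takes a cross-process lock by creating the lock file
	// exclusively; if the file already exists, another process holds it.
	// This is a simplified stand-in for minikube's lock handling.
	func acquire(path string) (release func(), err error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err != nil {
			return nil, err // lock is held elsewhere
		}
		f.Close()
		return func() { os.Remove(path) }, nil
	}

	func main() {
		lock := filepath.Join(os.TempDir(), "mk-demo.lock")
		os.Remove(lock) // clean any stale lock from a previous run
		release, err := acquire(lock)
		fmt.Println("first acquire ok:", err == nil)
		_, err2 := acquire(lock) // second acquire must fail while held
		fmt.Println("second acquire blocked:", err2 != nil)
		release()
	}
	```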
	I0630 14:06:06.314084 1460091 start.go:93] Provisioning new machine with config: &{Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0630 14:06:06.314172 1460091 start.go:125] createHost starting for "" (driver="kvm2")
	I0630 14:06:06.316769 1460091 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0630 14:06:06.316975 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:06.317044 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:06.332767 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0630 14:06:06.333480 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:06.334061 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:06.334083 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:06.334504 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:06.334801 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:06.335019 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:06.335217 1460091 start.go:159] libmachine.API.Create for "addons-412730" (driver="kvm2")
	I0630 14:06:06.335248 1460091 client.go:168] LocalClient.Create starting
	I0630 14:06:06.335289 1460091 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem
	I0630 14:06:06.483712 1460091 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem
	I0630 14:06:06.592251 1460091 main.go:141] libmachine: Running pre-create checks...
	I0630 14:06:06.592287 1460091 main.go:141] libmachine: (addons-412730) Calling .PreCreateCheck
	I0630 14:06:06.592947 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:06.593668 1460091 main.go:141] libmachine: Creating machine...
	I0630 14:06:06.593697 1460091 main.go:141] libmachine: (addons-412730) Calling .Create
	I0630 14:06:06.594139 1460091 main.go:141] libmachine: (addons-412730) creating KVM machine...
	I0630 14:06:06.594168 1460091 main.go:141] libmachine: (addons-412730) creating network...
	I0630 14:06:06.595936 1460091 main.go:141] libmachine: (addons-412730) DBG | found existing default KVM network
	I0630 14:06:06.596779 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.596550 1460113 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020ef20}
	I0630 14:06:06.596808 1460091 main.go:141] libmachine: (addons-412730) DBG | created network xml: 
	I0630 14:06:06.596818 1460091 main.go:141] libmachine: (addons-412730) DBG | <network>
	I0630 14:06:06.596822 1460091 main.go:141] libmachine: (addons-412730) DBG |   <name>mk-addons-412730</name>
	I0630 14:06:06.596828 1460091 main.go:141] libmachine: (addons-412730) DBG |   <dns enable='no'/>
	I0630 14:06:06.596832 1460091 main.go:141] libmachine: (addons-412730) DBG |   
	I0630 14:06:06.596839 1460091 main.go:141] libmachine: (addons-412730) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0630 14:06:06.596851 1460091 main.go:141] libmachine: (addons-412730) DBG |     <dhcp>
	I0630 14:06:06.596865 1460091 main.go:141] libmachine: (addons-412730) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0630 14:06:06.596872 1460091 main.go:141] libmachine: (addons-412730) DBG |     </dhcp>
	I0630 14:06:06.596877 1460091 main.go:141] libmachine: (addons-412730) DBG |   </ip>
	I0630 14:06:06.596883 1460091 main.go:141] libmachine: (addons-412730) DBG |   
	I0630 14:06:06.596888 1460091 main.go:141] libmachine: (addons-412730) DBG | </network>
	I0630 14:06:06.596897 1460091 main.go:141] libmachine: (addons-412730) DBG | 
	I0630 14:06:06.602938 1460091 main.go:141] libmachine: (addons-412730) DBG | trying to create private KVM network mk-addons-412730 192.168.39.0/24...
	I0630 14:06:06.682845 1460091 main.go:141] libmachine: (addons-412730) DBG | private KVM network mk-addons-412730 192.168.39.0/24 created
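
	(Editor's note: the network XML printed above is generated programmatically before being handed to libvirt. A sketch of how such XML can be produced with Go's `encoding/xml` follows; the struct names are illustrative stand-ins, not minikube's actual types, and the `<dns enable='no'/>` element is omitted for brevity.)

	```go
	package main

	import (
		"encoding/xml"
		"fmt"
	)

	// Minimal stand-in types mirroring the libvirt network XML in the log.
	type dhcpRange struct {
		Start string `xml:"start,attr"`
		End   string `xml:"end,attr"`
	}
	type ipElem struct {
		Address string    `xml:"address,attr"`
		Netmask string    `xml:"netmask,attr"`
		Range   dhcpRange `xml:"dhcp>range"`
	}
	type network struct {
		XMLName xml.Name `xml:"network"`
		Name    string   `xml:"name"`
		IP      ipElem   `xml:"ip"`
	}

	func main() {
		n := network{
			Name: "mk-addons-412730",
			IP: ipElem{
				Address: "192.168.39.1",
				Netmask: "255.255.255.0",
				Range:   dhcpRange{Start: "192.168.39.2", End: "192.168.39.253"},
			},
		}
		out, _ := xml.MarshalIndent(n, "", "  ")
		fmt.Println(string(out))
	}
	```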
	I0630 14:06:06.682892 1460091 main.go:141] libmachine: (addons-412730) setting up store path in /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 ...
	I0630 14:06:06.682905 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.682807 1460113 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:06.682951 1460091 main.go:141] libmachine: (addons-412730) building disk image from file:///home/jenkins/minikube-integration/20991-1452140/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	I0630 14:06:06.682983 1460091 main.go:141] libmachine: (addons-412730) Downloading /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20991-1452140/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso...
	I0630 14:06:06.983317 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:06.983139 1460113 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa...
	I0630 14:06:07.030013 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:07.029839 1460113 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/addons-412730.rawdisk...
	I0630 14:06:07.030043 1460091 main.go:141] libmachine: (addons-412730) DBG | Writing magic tar header
	I0630 14:06:07.030053 1460091 main.go:141] libmachine: (addons-412730) DBG | Writing SSH key tar header
	I0630 14:06:07.030061 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:07.029966 1460113 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 ...
	I0630 14:06:07.030071 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730
	I0630 14:06:07.030150 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730 (perms=drwx------)
	I0630 14:06:07.030175 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube/machines (perms=drwxr-xr-x)
	I0630 14:06:07.030186 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines
	I0630 14:06:07.030199 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140/.minikube (perms=drwxr-xr-x)
	I0630 14:06:07.030230 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration/20991-1452140 (perms=drwxrwxr-x)
	I0630 14:06:07.030243 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0630 14:06:07.030249 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:07.030257 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20991-1452140
	I0630 14:06:07.030272 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0630 14:06:07.030284 1460091 main.go:141] libmachine: (addons-412730) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0630 14:06:07.030316 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home/jenkins
	I0630 14:06:07.030332 1460091 main.go:141] libmachine: (addons-412730) DBG | checking permissions on dir: /home
	I0630 14:06:07.030374 1460091 main.go:141] libmachine: (addons-412730) creating domain...
	I0630 14:06:07.030392 1460091 main.go:141] libmachine: (addons-412730) DBG | skipping /home - not owner
	I0630 14:06:07.031398 1460091 main.go:141] libmachine: (addons-412730) define libvirt domain using xml: 
	I0630 14:06:07.031420 1460091 main.go:141] libmachine: (addons-412730) <domain type='kvm'>
	I0630 14:06:07.031429 1460091 main.go:141] libmachine: (addons-412730)   <name>addons-412730</name>
	I0630 14:06:07.031435 1460091 main.go:141] libmachine: (addons-412730)   <memory unit='MiB'>4096</memory>
	I0630 14:06:07.031443 1460091 main.go:141] libmachine: (addons-412730)   <vcpu>2</vcpu>
	I0630 14:06:07.031449 1460091 main.go:141] libmachine: (addons-412730)   <features>
	I0630 14:06:07.031457 1460091 main.go:141] libmachine: (addons-412730)     <acpi/>
	I0630 14:06:07.031472 1460091 main.go:141] libmachine: (addons-412730)     <apic/>
	I0630 14:06:07.031484 1460091 main.go:141] libmachine: (addons-412730)     <pae/>
	I0630 14:06:07.031495 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.031506 1460091 main.go:141] libmachine: (addons-412730)   </features>
	I0630 14:06:07.031515 1460091 main.go:141] libmachine: (addons-412730)   <cpu mode='host-passthrough'>
	I0630 14:06:07.031524 1460091 main.go:141] libmachine: (addons-412730)   
	I0630 14:06:07.031534 1460091 main.go:141] libmachine: (addons-412730)   </cpu>
	I0630 14:06:07.031544 1460091 main.go:141] libmachine: (addons-412730)   <os>
	I0630 14:06:07.031554 1460091 main.go:141] libmachine: (addons-412730)     <type>hvm</type>
	I0630 14:06:07.031563 1460091 main.go:141] libmachine: (addons-412730)     <boot dev='cdrom'/>
	I0630 14:06:07.031572 1460091 main.go:141] libmachine: (addons-412730)     <boot dev='hd'/>
	I0630 14:06:07.031581 1460091 main.go:141] libmachine: (addons-412730)     <bootmenu enable='no'/>
	I0630 14:06:07.031597 1460091 main.go:141] libmachine: (addons-412730)   </os>
	I0630 14:06:07.031609 1460091 main.go:141] libmachine: (addons-412730)   <devices>
	I0630 14:06:07.031619 1460091 main.go:141] libmachine: (addons-412730)     <disk type='file' device='cdrom'>
	I0630 14:06:07.031636 1460091 main.go:141] libmachine: (addons-412730)       <source file='/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/boot2docker.iso'/>
	I0630 14:06:07.031647 1460091 main.go:141] libmachine: (addons-412730)       <target dev='hdc' bus='scsi'/>
	I0630 14:06:07.031659 1460091 main.go:141] libmachine: (addons-412730)       <readonly/>
	I0630 14:06:07.031667 1460091 main.go:141] libmachine: (addons-412730)     </disk>
	I0630 14:06:07.031679 1460091 main.go:141] libmachine: (addons-412730)     <disk type='file' device='disk'>
	I0630 14:06:07.031689 1460091 main.go:141] libmachine: (addons-412730)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0630 14:06:07.031737 1460091 main.go:141] libmachine: (addons-412730)       <source file='/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/addons-412730.rawdisk'/>
	I0630 14:06:07.031764 1460091 main.go:141] libmachine: (addons-412730)       <target dev='hda' bus='virtio'/>
	I0630 14:06:07.031774 1460091 main.go:141] libmachine: (addons-412730)     </disk>
	I0630 14:06:07.031792 1460091 main.go:141] libmachine: (addons-412730)     <interface type='network'>
	I0630 14:06:07.031805 1460091 main.go:141] libmachine: (addons-412730)       <source network='mk-addons-412730'/>
	I0630 14:06:07.031820 1460091 main.go:141] libmachine: (addons-412730)       <model type='virtio'/>
	I0630 14:06:07.031854 1460091 main.go:141] libmachine: (addons-412730)     </interface>
	I0630 14:06:07.031878 1460091 main.go:141] libmachine: (addons-412730)     <interface type='network'>
	I0630 14:06:07.031890 1460091 main.go:141] libmachine: (addons-412730)       <source network='default'/>
	I0630 14:06:07.031901 1460091 main.go:141] libmachine: (addons-412730)       <model type='virtio'/>
	I0630 14:06:07.031909 1460091 main.go:141] libmachine: (addons-412730)     </interface>
	I0630 14:06:07.031919 1460091 main.go:141] libmachine: (addons-412730)     <serial type='pty'>
	I0630 14:06:07.031927 1460091 main.go:141] libmachine: (addons-412730)       <target port='0'/>
	I0630 14:06:07.031942 1460091 main.go:141] libmachine: (addons-412730)     </serial>
	I0630 14:06:07.031951 1460091 main.go:141] libmachine: (addons-412730)     <console type='pty'>
	I0630 14:06:07.031964 1460091 main.go:141] libmachine: (addons-412730)       <target type='serial' port='0'/>
	I0630 14:06:07.031975 1460091 main.go:141] libmachine: (addons-412730)     </console>
	I0630 14:06:07.031982 1460091 main.go:141] libmachine: (addons-412730)     <rng model='virtio'>
	I0630 14:06:07.031995 1460091 main.go:141] libmachine: (addons-412730)       <backend model='random'>/dev/random</backend>
	I0630 14:06:07.032001 1460091 main.go:141] libmachine: (addons-412730)     </rng>
	I0630 14:06:07.032011 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.032016 1460091 main.go:141] libmachine: (addons-412730)     
	I0630 14:06:07.032026 1460091 main.go:141] libmachine: (addons-412730)   </devices>
	I0630 14:06:07.032034 1460091 main.go:141] libmachine: (addons-412730) </domain>
	I0630 14:06:07.032066 1460091 main.go:141] libmachine: (addons-412730) 
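
	(Editor's note: once the domain XML above is defined, the driver defines and starts the domain through the libvirt API, which requires a running libvirtd. The bindings-dependent calls cannot run standalone, so the sketch below only shows a well-formedness check one could apply to such generated XML before submitting it; the `validateXML` helper is illustrative, not part of minikube.)

	```go
	package main

	import (
		"encoding/xml"
		"fmt"
		"strings"
	)

	// validateXML walks the token stream and reports whether the document
	// is well-formed (every open tag has a matching close).
	func validateXML(doc string) error {
		dec := xml.NewDecoder(strings.NewReader(doc))
		for {
			_, err := dec.Token()
			if err != nil {
				if err.Error() == "EOF" {
					return nil
				}
				return err
			}
		}
	}

	func main() {
		good := `<domain type='kvm'><name>addons-412730</name></domain>`
		bad := `<domain type='kvm'><name>addons-412730</domain>`
		fmt.Println("good well-formed:", validateXML(good) == nil)
		fmt.Println("bad well-formed:", validateXML(bad) == nil)
	}
	```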
	I0630 14:06:07.037044 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:0d:7b:07 in network default
	I0630 14:06:07.037851 1460091 main.go:141] libmachine: (addons-412730) starting domain...
	I0630 14:06:07.037899 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:07.037908 1460091 main.go:141] libmachine: (addons-412730) ensuring networks are active...
	I0630 14:06:07.038725 1460091 main.go:141] libmachine: (addons-412730) Ensuring network default is active
	I0630 14:06:07.039106 1460091 main.go:141] libmachine: (addons-412730) Ensuring network mk-addons-412730 is active
	I0630 14:06:07.039715 1460091 main.go:141] libmachine: (addons-412730) getting domain XML...
	I0630 14:06:07.040672 1460091 main.go:141] libmachine: (addons-412730) creating domain...
	I0630 14:06:08.319736 1460091 main.go:141] libmachine: (addons-412730) waiting for IP...
	I0630 14:06:08.320757 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.321298 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.321358 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.321305 1460113 retry.go:31] will retry after 217.608702ms: waiting for domain to come up
	I0630 14:06:08.541088 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.541707 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.541732 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.541668 1460113 retry.go:31] will retry after 322.22603ms: waiting for domain to come up
	I0630 14:06:08.865505 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:08.865965 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:08.865994 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:08.865925 1460113 retry.go:31] will retry after 339.049792ms: waiting for domain to come up
	I0630 14:06:09.206655 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:09.207155 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:09.207213 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:09.207148 1460113 retry.go:31] will retry after 478.054487ms: waiting for domain to come up
	I0630 14:06:09.686885 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:09.687397 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:09.687426 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:09.687347 1460113 retry.go:31] will retry after 663.338232ms: waiting for domain to come up
	I0630 14:06:10.352433 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:10.352917 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:10.352942 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:10.352876 1460113 retry.go:31] will retry after 824.880201ms: waiting for domain to come up
	I0630 14:06:11.179557 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:11.180050 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:11.180081 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:11.180000 1460113 retry.go:31] will retry after 1.072535099s: waiting for domain to come up
	I0630 14:06:12.253993 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:12.254526 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:12.254560 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:12.254433 1460113 retry.go:31] will retry after 1.120902402s: waiting for domain to come up
	I0630 14:06:13.376695 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:13.377283 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:13.377315 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:13.377244 1460113 retry.go:31] will retry after 1.419759095s: waiting for domain to come up
	I0630 14:06:14.799069 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:14.799546 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:14.799574 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:14.799514 1460113 retry.go:31] will retry after 1.843918596s: waiting for domain to come up
	I0630 14:06:16.645512 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:16.646025 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:16.646082 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:16.646003 1460113 retry.go:31] will retry after 2.785739179s: waiting for domain to come up
	I0630 14:06:19.434426 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:19.435055 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:19.435086 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:19.434987 1460113 retry.go:31] will retry after 2.736128675s: waiting for domain to come up
	I0630 14:06:22.172470 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:22.173071 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:22.173092 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:22.173042 1460113 retry.go:31] will retry after 3.042875133s: waiting for domain to come up
	I0630 14:06:25.219310 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:25.219910 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find current IP address of domain addons-412730 in network mk-addons-412730
	I0630 14:06:25.219934 1460091 main.go:141] libmachine: (addons-412730) DBG | I0630 14:06:25.219869 1460113 retry.go:31] will retry after 4.255226103s: waiting for domain to come up
	I0630 14:06:29.478898 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.479625 1460091 main.go:141] libmachine: (addons-412730) found domain IP: 192.168.39.114
	I0630 14:06:29.479653 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has current primary IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
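
	(Editor's note: the "will retry after ..." lines above show a randomized, roughly doubling backoff while polling for the domain's IP. A minimal stand-alone sketch of that pattern follows; the `waitFor` helper and jitter formula are illustrative, not minikube's actual retry.go implementation.)

	```go
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor retries fn with a jittered, doubling backoff until it
	// succeeds or maxAttempts is exhausted.
	func waitFor(fn func() error, maxAttempts int) error {
		delay := 200 * time.Millisecond
		for i := 0; i < maxAttempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			// jitter the delay so parallel waiters don't synchronize
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
			delay *= 2
		}
		return errors.New("timed out waiting for condition")
	}

	func main() {
		attempts := 0
		err := waitFor(func() error {
			attempts++
			if attempts < 3 {
				return errors.New("no IP yet") // simulate "unable to find current IP"
			}
			return nil
		}, 10)
		fmt.Println(err == nil, attempts)
	}
	```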
	I0630 14:06:29.479661 1460091 main.go:141] libmachine: (addons-412730) reserving static IP address...
	I0630 14:06:29.480160 1460091 main.go:141] libmachine: (addons-412730) DBG | unable to find host DHCP lease matching {name: "addons-412730", mac: "52:54:00:ac:59:ff", ip: "192.168.39.114"} in network mk-addons-412730
	I0630 14:06:29.563376 1460091 main.go:141] libmachine: (addons-412730) reserved static IP address 192.168.39.114 for domain addons-412730
	I0630 14:06:29.563409 1460091 main.go:141] libmachine: (addons-412730) waiting for SSH...
	I0630 14:06:29.563418 1460091 main.go:141] libmachine: (addons-412730) DBG | Getting to WaitForSSH function...
	I0630 14:06:29.566605 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.567079 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.567114 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.567268 1460091 main.go:141] libmachine: (addons-412730) DBG | Using SSH client type: external
	I0630 14:06:29.567309 1460091 main.go:141] libmachine: (addons-412730) DBG | Using SSH private key: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa (-rw-------)
	I0630 14:06:29.567351 1460091 main.go:141] libmachine: (addons-412730) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0630 14:06:29.567371 1460091 main.go:141] libmachine: (addons-412730) DBG | About to run SSH command:
	I0630 14:06:29.567386 1460091 main.go:141] libmachine: (addons-412730) DBG | exit 0
	I0630 14:06:29.697378 1460091 main.go:141] libmachine: (addons-412730) DBG | SSH cmd err, output: <nil>: 
	I0630 14:06:29.697644 1460091 main.go:141] libmachine: (addons-412730) KVM machine creation complete
	I0630 14:06:29.698028 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:29.698656 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:29.698905 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:29.699080 1460091 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0630 14:06:29.699098 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:29.700512 1460091 main.go:141] libmachine: Detecting operating system of created instance...
	I0630 14:06:29.700530 1460091 main.go:141] libmachine: Waiting for SSH to be available...
	I0630 14:06:29.700538 1460091 main.go:141] libmachine: Getting to WaitForSSH function...
	I0630 14:06:29.700545 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.702878 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.703363 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.703393 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.703678 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.703917 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.704093 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.704253 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.704472 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.704757 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.704772 1460091 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0630 14:06:29.825352 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:06:29.825394 1460091 main.go:141] libmachine: Detecting the provisioner...
	I0630 14:06:29.825405 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.828698 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.829249 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.829291 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.829467 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.829702 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.829910 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.830086 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.830284 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.830503 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.830515 1460091 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0630 14:06:29.950727 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0630 14:06:29.950815 1460091 main.go:141] libmachine: found compatible host: buildroot
	I0630 14:06:29.950829 1460091 main.go:141] libmachine: Provisioning with buildroot...
	I0630 14:06:29.950838 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:29.951114 1460091 buildroot.go:166] provisioning hostname "addons-412730"
	I0630 14:06:29.951153 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:29.951406 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:29.954775 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.955251 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:29.955283 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:29.955448 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:29.955676 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.955864 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:29.956131 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:29.956359 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:29.956598 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:29.956616 1460091 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-412730 && echo "addons-412730" | sudo tee /etc/hostname
	I0630 14:06:30.091933 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-412730
	
	I0630 14:06:30.091974 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.095576 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.095967 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.095993 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.096193 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.096420 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.096640 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.096775 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.096955 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:30.097249 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:30.097278 1460091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-412730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-412730/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-412730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0630 14:06:30.228409 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0630 14:06:30.228455 1460091 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20991-1452140/.minikube CaCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20991-1452140/.minikube}
	I0630 14:06:30.228507 1460091 buildroot.go:174] setting up certificates
	I0630 14:06:30.228539 1460091 provision.go:84] configureAuth start
	I0630 14:06:30.228557 1460091 main.go:141] libmachine: (addons-412730) Calling .GetMachineName
	I0630 14:06:30.228999 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:30.232598 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.233018 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.233052 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.233306 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.235934 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.236310 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.236353 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.236511 1460091 provision.go:143] copyHostCerts
	I0630 14:06:30.236588 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.pem (1078 bytes)
	I0630 14:06:30.236717 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/cert.pem (1123 bytes)
	I0630 14:06:30.236771 1460091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20991-1452140/.minikube/key.pem (1675 bytes)
	I0630 14:06:30.236826 1460091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem org=jenkins.addons-412730 san=[127.0.0.1 192.168.39.114 addons-412730 localhost minikube]
	I0630 14:06:30.629859 1460091 provision.go:177] copyRemoteCerts
	I0630 14:06:30.629936 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0630 14:06:30.629965 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.633589 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.634037 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.634067 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.634292 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.634709 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.634951 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.635149 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:30.732351 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0630 14:06:30.765263 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0630 14:06:30.797980 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0630 14:06:30.829589 1460091 provision.go:87] duration metric: took 601.031936ms to configureAuth
	I0630 14:06:30.829626 1460091 buildroot.go:189] setting minikube options for container-runtime
	I0630 14:06:30.829835 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:30.829875 1460091 main.go:141] libmachine: Checking connection to Docker...
	I0630 14:06:30.829891 1460091 main.go:141] libmachine: (addons-412730) Calling .GetURL
	I0630 14:06:30.831493 1460091 main.go:141] libmachine: (addons-412730) DBG | using libvirt version 6000000
	I0630 14:06:30.834168 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.834575 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.834608 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.834836 1460091 main.go:141] libmachine: Docker is up and running!
	I0630 14:06:30.834858 1460091 main.go:141] libmachine: Reticulating splines...
	I0630 14:06:30.834867 1460091 client.go:171] duration metric: took 24.499610068s to LocalClient.Create
	I0630 14:06:30.834910 1460091 start.go:167] duration metric: took 24.499694666s to libmachine.API.Create "addons-412730"
	I0630 14:06:30.834925 1460091 start.go:293] postStartSetup for "addons-412730" (driver="kvm2")
	I0630 14:06:30.834938 1460091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0630 14:06:30.834971 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:30.835263 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0630 14:06:30.835291 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.837701 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.838027 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.838070 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.838230 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.838425 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.838615 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.838765 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:30.930536 1460091 ssh_runner.go:195] Run: cat /etc/os-release
	I0630 14:06:30.935492 1460091 info.go:137] Remote host: Buildroot 2025.02
	I0630 14:06:30.935534 1460091 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1452140/.minikube/addons for local assets ...
	I0630 14:06:30.935631 1460091 filesync.go:126] Scanning /home/jenkins/minikube-integration/20991-1452140/.minikube/files for local assets ...
	I0630 14:06:30.935674 1460091 start.go:296] duration metric: took 100.742963ms for postStartSetup
	I0630 14:06:30.935713 1460091 main.go:141] libmachine: (addons-412730) Calling .GetConfigRaw
	I0630 14:06:30.936417 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:30.939655 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.940194 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.940223 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.940486 1460091 profile.go:143] Saving config to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/config.json ...
	I0630 14:06:30.940676 1460091 start.go:128] duration metric: took 24.626491157s to createHost
	I0630 14:06:30.940701 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:30.943451 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.943947 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:30.943979 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:30.944167 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:30.944383 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.944557 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:30.944780 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:30.944979 1460091 main.go:141] libmachine: Using SSH client type: native
	I0630 14:06:30.945339 1460091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83bb20] 0x83e820 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I0630 14:06:30.945363 1460091 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0630 14:06:31.062586 1460091 main.go:141] libmachine: SSH cmd err, output: <nil>: 1751292391.035640439
	
	I0630 14:06:31.062617 1460091 fix.go:216] guest clock: 1751292391.035640439
	I0630 14:06:31.062625 1460091 fix.go:229] Guest: 2025-06-30 14:06:31.035640439 +0000 UTC Remote: 2025-06-30 14:06:30.940689328 +0000 UTC m=+24.741258527 (delta=94.951111ms)
	I0630 14:06:31.062664 1460091 fix.go:200] guest clock delta is within tolerance: 94.951111ms
	I0630 14:06:31.062669 1460091 start.go:83] releasing machines lock for "addons-412730", held for 24.748599614s
	I0630 14:06:31.062697 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.063068 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:31.066256 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.066740 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.066774 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.067022 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.067620 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.067907 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:31.068104 1460091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0630 14:06:31.068165 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:31.068221 1460091 ssh_runner.go:195] Run: cat /version.json
	I0630 14:06:31.068250 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:31.071486 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.071690 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072008 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.072043 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072103 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:31.072130 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:31.072204 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:31.072375 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:31.072476 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:31.072559 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:31.072632 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:31.072686 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:31.072859 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:31.072867 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:31.159582 1460091 ssh_runner.go:195] Run: systemctl --version
	I0630 14:06:31.186817 1460091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0630 14:06:31.193553 1460091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0630 14:06:31.193649 1460091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0630 14:06:31.215105 1460091 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0630 14:06:31.215137 1460091 start.go:495] detecting cgroup driver to use...
	I0630 14:06:31.215213 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0630 14:06:31.257543 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0630 14:06:31.273400 1460091 docker.go:230] disabling cri-docker service (if available) ...
	I0630 14:06:31.273466 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0630 14:06:31.289789 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0630 14:06:31.306138 1460091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0630 14:06:31.453571 1460091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0630 14:06:31.593173 1460091 docker.go:246] disabling docker service ...
	I0630 14:06:31.593260 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0630 14:06:31.610223 1460091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0630 14:06:31.625803 1460091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0630 14:06:31.823510 1460091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0630 14:06:31.974811 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0630 14:06:31.996098 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0630 14:06:32.020154 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0630 14:06:32.033292 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0630 14:06:32.046251 1460091 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0630 14:06:32.046373 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0630 14:06:32.059569 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0630 14:06:32.072460 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0630 14:06:32.085242 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0630 14:06:32.098259 1460091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0630 14:06:32.111503 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0630 14:06:32.124063 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0630 14:06:32.136348 1460091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0630 14:06:32.148960 1460091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0630 14:06:32.159881 1460091 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0630 14:06:32.159967 1460091 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0630 14:06:32.176065 1460091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0630 14:06:32.188348 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:32.325076 1460091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0630 14:06:32.359838 1460091 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0630 14:06:32.359979 1460091 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0630 14:06:32.366616 1460091 retry.go:31] will retry after 624.469247ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0630 14:06:32.991518 1460091 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0630 14:06:32.997598 1460091 start.go:563] Will wait 60s for crictl version
	I0630 14:06:32.997677 1460091 ssh_runner.go:195] Run: which crictl
	I0630 14:06:33.002325 1460091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0630 14:06:33.045054 1460091 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0630 14:06:33.045186 1460091 ssh_runner.go:195] Run: containerd --version
	I0630 14:06:33.074290 1460091 ssh_runner.go:195] Run: containerd --version
	I0630 14:06:33.134404 1460091 out.go:177] * Preparing Kubernetes v1.33.2 on containerd 1.7.23 ...
	I0630 14:06:33.198052 1460091 main.go:141] libmachine: (addons-412730) Calling .GetIP
	I0630 14:06:33.201668 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:33.202138 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:33.202162 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:33.202486 1460091 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0630 14:06:33.207929 1460091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:06:33.224479 1460091 kubeadm.go:875] updating cluster {Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0630 14:06:33.224651 1460091 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime containerd
	I0630 14:06:33.224723 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:33.262407 1460091 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.2". assuming images are not preloaded.
	I0630 14:06:33.262480 1460091 ssh_runner.go:195] Run: which lz4
	I0630 14:06:33.267241 1460091 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0630 14:06:33.272514 1460091 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0630 14:06:33.272561 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (420558900 bytes)
	I0630 14:06:34.883083 1460091 containerd.go:563] duration metric: took 1.615882395s to copy over tarball
	I0630 14:06:34.883194 1460091 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0630 14:06:36.966670 1460091 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08344467s)
	I0630 14:06:36.966710 1460091 containerd.go:570] duration metric: took 2.083586834s to extract the tarball
	I0630 14:06:36.966722 1460091 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0630 14:06:37.007649 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:37.150742 1460091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0630 14:06:37.193070 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:37.245622 1460091 retry.go:31] will retry after 173.895536ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-06-30T14:06:37Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0630 14:06:37.420139 1460091 ssh_runner.go:195] Run: sudo crictl images --output json
	I0630 14:06:37.464724 1460091 containerd.go:627] all images are preloaded for containerd runtime.
	I0630 14:06:37.464758 1460091 cache_images.go:84] Images are preloaded, skipping loading
	I0630 14:06:37.464771 1460091 kubeadm.go:926] updating node { 192.168.39.114 8443 v1.33.2 containerd true true} ...
	I0630 14:06:37.464919 1460091 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-412730 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0630 14:06:37.465002 1460091 ssh_runner.go:195] Run: sudo crictl info
	I0630 14:06:37.511001 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:37.511034 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:37.511049 1460091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0630 14:06:37.511083 1460091 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.33.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-412730 NodeName:addons-412730 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0630 14:06:37.511271 1460091 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-412730"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0630 14:06:37.511357 1460091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.2
	I0630 14:06:37.525652 1460091 binaries.go:44] Found k8s binaries, skipping transfer
	I0630 14:06:37.525746 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0630 14:06:37.538805 1460091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0630 14:06:37.562031 1460091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0630 14:06:37.587566 1460091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2309 bytes)
	I0630 14:06:37.610218 1460091 ssh_runner.go:195] Run: grep 192.168.39.114	control-plane.minikube.internal$ /etc/hosts
	I0630 14:06:37.615571 1460091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0630 14:06:37.632131 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:37.779642 1460091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:06:37.816746 1460091 certs.go:68] Setting up /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730 for IP: 192.168.39.114
	I0630 14:06:37.816781 1460091 certs.go:194] generating shared ca certs ...
	I0630 14:06:37.816801 1460091 certs.go:226] acquiring lock for ca certs: {Name:mk0651a034eff71720267efe75974a64ed116095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:37.816978 1460091 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key
	I0630 14:06:38.156994 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt ...
	I0630 14:06:38.157034 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt: {Name:mkd96adf4b8dd000ef155465cd7541cb4dbc54f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.157267 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key ...
	I0630 14:06:38.157285 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key: {Name:mk6da24087206aaf4a1c31ab7ae44030109e489f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.157410 1460091 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key
	I0630 14:06:38.393807 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt ...
	I0630 14:06:38.393842 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt: {Name:mk321b6cabce084092be365d32608954916437e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.394011 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key ...
	I0630 14:06:38.394022 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key: {Name:mk82210dbfc17828b961241482db840048e12b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:38.394107 1460091 certs.go:256] generating profile certs ...
	I0630 14:06:38.394167 1460091 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key
	I0630 14:06:38.394181 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt with IP's: []
	I0630 14:06:39.030200 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt ...
	I0630 14:06:39.030240 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: {Name:mkc9df953aca8566f0870f2298300ff89b509f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.030418 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key ...
	I0630 14:06:39.030431 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.key: {Name:mka533b0514825fa7b24c00fc43d73342f608e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.030498 1460091 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367
	I0630 14:06:39.030521 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114]
	I0630 14:06:39.110277 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 ...
	I0630 14:06:39.110319 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367: {Name:mk48ce6fc18dec0b61c5b66960071aff2a24b262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.110478 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367 ...
	I0630 14:06:39.110491 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367: {Name:mk75d3bfb9efccf05811ea90591687efdb3f8988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.110564 1460091 certs.go:381] copying /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt.5344c367 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt
	I0630 14:06:39.110641 1460091 certs.go:385] copying /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key.5344c367 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key
	I0630 14:06:39.110691 1460091 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key
	I0630 14:06:39.110708 1460091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt with IP's: []
	I0630 14:06:39.311094 1460091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt ...
	I0630 14:06:39.311131 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt: {Name:mkc683f67a11502b5bdeac9ab79459fda8dea4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.311302 1460091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key ...
	I0630 14:06:39.311315 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key: {Name:mk896db09a1f34404a9d7ba2ae83a6472f785239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:39.311491 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca-key.pem (1679 bytes)
	I0630 14:06:39.311529 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/ca.pem (1078 bytes)
	I0630 14:06:39.311552 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/cert.pem (1123 bytes)
	I0630 14:06:39.311574 1460091 certs.go:484] found cert: /home/jenkins/minikube-integration/20991-1452140/.minikube/certs/key.pem (1675 bytes)
	I0630 14:06:39.312289 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0630 14:06:39.348883 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0630 14:06:39.387215 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0630 14:06:39.418089 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0630 14:06:39.456310 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0630 14:06:39.485942 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0630 14:06:39.518368 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0630 14:06:39.550454 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0630 14:06:39.582512 1460091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0630 14:06:39.617828 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0630 14:06:39.640030 1460091 ssh_runner.go:195] Run: openssl version
	I0630 14:06:39.647364 1460091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0630 14:06:39.660898 1460091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.666460 1460091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 30 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.666541 1460091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0630 14:06:39.674132 1460091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0630 14:06:39.687542 1460091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0630 14:06:39.692849 1460091 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0630 14:06:39.692930 1460091 kubeadm.go:392] StartCluster: {Name:addons-412730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:addons-412730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:06:39.693042 1460091 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0630 14:06:39.693124 1460091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0630 14:06:39.733818 1460091 cri.go:89] found id: ""
	I0630 14:06:39.733920 1460091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0630 14:06:39.748350 1460091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0630 14:06:39.762340 1460091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0630 14:06:39.774501 1460091 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0630 14:06:39.774532 1460091 kubeadm.go:157] found existing configuration files:
	
	I0630 14:06:39.774596 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0630 14:06:39.786405 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0630 14:06:39.786474 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0630 14:06:39.798586 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0630 14:06:39.809858 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0630 14:06:39.809932 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0630 14:06:39.822150 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0630 14:06:39.833619 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0630 14:06:39.833683 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0630 14:06:39.845682 1460091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0630 14:06:39.856947 1460091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0630 14:06:39.857015 1460091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0630 14:06:39.870036 1460091 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0630 14:06:39.922555 1460091 kubeadm.go:310] [init] Using Kubernetes version: v1.33.2
	I0630 14:06:39.922624 1460091 kubeadm.go:310] [preflight] Running pre-flight checks
	I0630 14:06:40.045815 1460091 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0630 14:06:40.045999 1460091 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0630 14:06:40.046138 1460091 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0630 14:06:40.052549 1460091 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0630 14:06:40.071818 1460091 out.go:235]   - Generating certificates and keys ...
	I0630 14:06:40.071955 1460091 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0630 14:06:40.072042 1460091 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0630 14:06:40.453325 1460091 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0630 14:06:40.505817 1460091 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0630 14:06:41.044548 1460091 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0630 14:06:41.417521 1460091 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0630 14:06:41.739226 1460091 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0630 14:06:41.739421 1460091 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-412730 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0630 14:06:41.843371 1460091 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0630 14:06:41.843539 1460091 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-412730 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I0630 14:06:42.399109 1460091 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0630 14:06:42.840033 1460091 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0630 14:06:43.009726 1460091 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0630 14:06:43.009824 1460091 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0630 14:06:43.506160 1460091 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0630 14:06:43.698222 1460091 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0630 14:06:43.840816 1460091 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0630 14:06:44.231431 1460091 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0630 14:06:44.461049 1460091 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0630 14:06:44.461356 1460091 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0630 14:06:44.463997 1460091 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0630 14:06:44.465945 1460091 out.go:235]   - Booting up control plane ...
	I0630 14:06:44.466073 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0630 14:06:44.466167 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0630 14:06:44.466289 1460091 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0630 14:06:44.484244 1460091 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0630 14:06:44.494126 1460091 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0630 14:06:44.494220 1460091 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0630 14:06:44.678804 1460091 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0630 14:06:44.678979 1460091 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0630 14:06:45.689158 1460091 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.011115741s
	I0630 14:06:45.693304 1460091 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0630 14:06:45.693435 1460091 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.114:8443/livez
	I0630 14:06:45.694157 1460091 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0630 14:06:45.694324 1460091 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0630 14:06:48.529853 1460091 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.836599214s
	I0630 14:06:49.645556 1460091 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.952842655s
	I0630 14:06:51.692654 1460091 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.00153129s
	I0630 14:06:51.707013 1460091 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0630 14:06:51.730537 1460091 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0630 14:06:51.769844 1460091 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0630 14:06:51.770065 1460091 kubeadm.go:310] [mark-control-plane] Marking the node addons-412730 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0630 14:06:51.785586 1460091 kubeadm.go:310] [bootstrap-token] Using token: ggslqu.tjlqizciadnjmkc4
	I0630 14:06:51.787072 1460091 out.go:235]   - Configuring RBAC rules ...
	I0630 14:06:51.787249 1460091 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0630 14:06:51.798527 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0630 14:06:51.808767 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0630 14:06:51.813113 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0630 14:06:51.818246 1460091 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0630 14:06:51.822008 1460091 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0630 14:06:52.099709 1460091 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0630 14:06:52.594117 1460091 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0630 14:06:53.099418 1460091 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0630 14:06:53.100502 1460091 kubeadm.go:310] 
	I0630 14:06:53.100601 1460091 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0630 14:06:53.100613 1460091 kubeadm.go:310] 
	I0630 14:06:53.100755 1460091 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0630 14:06:53.100795 1460091 kubeadm.go:310] 
	I0630 14:06:53.100858 1460091 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0630 14:06:53.100965 1460091 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0630 14:06:53.101053 1460091 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0630 14:06:53.101065 1460091 kubeadm.go:310] 
	I0630 14:06:53.101171 1460091 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0630 14:06:53.101191 1460091 kubeadm.go:310] 
	I0630 14:06:53.101279 1460091 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0630 14:06:53.101291 1460091 kubeadm.go:310] 
	I0630 14:06:53.101389 1460091 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0630 14:06:53.101534 1460091 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0630 14:06:53.101651 1460091 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0630 14:06:53.101664 1460091 kubeadm.go:310] 
	I0630 14:06:53.101782 1460091 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0630 14:06:53.101913 1460091 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0630 14:06:53.101931 1460091 kubeadm.go:310] 
	I0630 14:06:53.102062 1460091 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ggslqu.tjlqizciadnjmkc4 \
	I0630 14:06:53.102204 1460091 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:617c09b4db1bc5793f47445d1f5bc6fe956626f21f2861489a8e746dc9df0278 \
	I0630 14:06:53.102237 1460091 kubeadm.go:310] 	--control-plane 
	I0630 14:06:53.102246 1460091 kubeadm.go:310] 
	I0630 14:06:53.102351 1460091 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0630 14:06:53.102362 1460091 kubeadm.go:310] 
	I0630 14:06:53.102448 1460091 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ggslqu.tjlqizciadnjmkc4 \
	I0630 14:06:53.102611 1460091 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:617c09b4db1bc5793f47445d1f5bc6fe956626f21f2861489a8e746dc9df0278 
	I0630 14:06:53.104820 1460091 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0630 14:06:53.104859 1460091 cni.go:84] Creating CNI manager for ""
	I0630 14:06:53.104869 1460091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:06:53.106742 1460091 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0630 14:06:53.108147 1460091 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0630 14:06:53.121105 1460091 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0630 14:06:53.146410 1460091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0630 14:06:53.146477 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:53.146567 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-412730 minikube.k8s.io/updated_at=2025_06_30T14_06_53_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff minikube.k8s.io/name=addons-412730 minikube.k8s.io/primary=true
	I0630 14:06:53.306096 1460091 ops.go:34] apiserver oom_adj: -16
	I0630 14:06:53.306244 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:53.806580 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:54.306722 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:54.807256 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:55.306344 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:55.807179 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.306640 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.807184 1460091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0630 14:06:56.895027 1460091 kubeadm.go:1105] duration metric: took 3.748614141s to wait for elevateKubeSystemPrivileges
	I0630 14:06:56.895079 1460091 kubeadm.go:394] duration metric: took 17.202154504s to StartCluster
	I0630 14:06:56.895108 1460091 settings.go:142] acquiring lock: {Name:mk841f56cd7a9b39ff7ba20d8e74be5d85ec1f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:56.895268 1460091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:06:56.895670 1460091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20991-1452140/kubeconfig: {Name:mkaf116de3c28eb3dfd9964f3211c065b2db02a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0630 14:06:56.895901 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0630 14:06:56.895932 1460091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0630 14:06:56.895997 1460091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0630 14:06:56.896117 1460091 addons.go:69] Setting yakd=true in profile "addons-412730"
	I0630 14:06:56.896139 1460091 addons.go:238] Setting addon yakd=true in "addons-412730"
	I0630 14:06:56.896139 1460091 addons.go:69] Setting ingress=true in profile "addons-412730"
	I0630 14:06:56.896159 1460091 addons.go:238] Setting addon ingress=true in "addons-412730"
	I0630 14:06:56.896176 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896165 1460091 addons.go:69] Setting registry=true in profile "addons-412730"
	I0630 14:06:56.896200 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896203 1460091 addons.go:238] Setting addon registry=true in "addons-412730"
	I0630 14:06:56.896203 1460091 addons.go:69] Setting inspektor-gadget=true in profile "addons-412730"
	I0630 14:06:56.896223 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:56.896233 1460091 addons.go:238] Setting addon inspektor-gadget=true in "addons-412730"
	I0630 14:06:56.896223 1460091 addons.go:69] Setting metrics-server=true in profile "addons-412730"
	I0630 14:06:56.896245 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896253 1460091 addons.go:238] Setting addon metrics-server=true in "addons-412730"
	I0630 14:06:56.896265 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896276 1460091 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-412730"
	I0630 14:06:56.896285 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896287 1460091 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-412730"
	I0630 14:06:56.896305 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896570 1460091 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-412730"
	I0630 14:06:56.896661 1460091 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-412730"
	I0630 14:06:56.896723 1460091 addons.go:69] Setting volcano=true in profile "addons-412730"
	I0630 14:06:56.896778 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896785 1460091 addons.go:69] Setting registry-creds=true in profile "addons-412730"
	I0630 14:06:56.896751 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896799 1460091 addons.go:69] Setting volumesnapshots=true in profile "addons-412730"
	I0630 14:06:56.896804 1460091 addons.go:238] Setting addon registry-creds=true in "addons-412730"
	I0630 14:06:56.896811 1460091 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-412730"
	I0630 14:06:56.896816 1460091 addons.go:238] Setting addon volumesnapshots=true in "addons-412730"
	I0630 14:06:56.896825 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896830 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896835 1460091 addons.go:69] Setting cloud-spanner=true in profile "addons-412730"
	I0630 14:06:56.896838 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896836 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896852 1460091 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-412730"
	I0630 14:06:56.896876 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896897 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896918 1460091 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-412730"
	I0630 14:06:56.896941 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897097 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897165 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897187 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897280 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897295 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.896826 1460091 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-412730"
	I0630 14:06:56.897181 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897361 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896845 1460091 addons.go:238] Setting addon cloud-spanner=true in "addons-412730"
	I0630 14:06:56.897199 1460091 addons.go:69] Setting storage-provisioner=true in profile "addons-412730"
	I0630 14:06:56.897456 1460091 addons.go:238] Setting addon storage-provisioner=true in "addons-412730"
	I0630 14:06:56.897488 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897499 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897606 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.897861 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897876 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.897886 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897898 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897978 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898012 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896791 1460091 addons.go:238] Setting addon volcano=true in "addons-412730"
	I0630 14:06:56.898062 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896771 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898162 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.896767 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.898520 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897212 1460091 addons.go:69] Setting default-storageclass=true in profile "addons-412730"
	I0630 14:06:56.898795 1460091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-412730"
	I0630 14:06:56.899315 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.899389 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897224 1460091 addons.go:69] Setting gcp-auth=true in profile "addons-412730"
	I0630 14:06:56.899644 1460091 mustload.go:65] Loading cluster: addons-412730
	I0630 14:06:56.897241 1460091 addons.go:69] Setting ingress-dns=true in profile "addons-412730"
	I0630 14:06:56.899700 1460091 addons.go:238] Setting addon ingress-dns=true in "addons-412730"
	I0630 14:06:56.899796 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.896785 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.899911 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.897328 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.899604 1460091 out.go:177] * Verifying Kubernetes components...
	I0630 14:06:56.915173 1460091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0630 14:06:56.925317 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0630 14:06:56.933471 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0630 14:06:56.933567 1460091 config.go:182] Loaded profile config "addons-412730": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:06:56.933596 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0630 14:06:56.934049 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934108 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.934159 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934204 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.934401 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.934443 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.938799 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0630 14:06:56.939041 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0630 14:06:56.939193 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0630 14:06:56.939457 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0630 14:06:56.939729 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940028 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940309 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.940326 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.940413 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.940931 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941099 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.941112 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.941179 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.941232 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941301 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.941738 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.941788 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.942491 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942515 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.942624 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.942661 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942683 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.942765 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.942792 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.942805 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943018 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.943038 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943153 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.943163 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.943215 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.943262 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.944142 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.944175 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.944193 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.944211 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.944294 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.944358 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.945770 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.945856 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.946237 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.946282 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.947082 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.947128 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.948967 1460091 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-412730"
	I0630 14:06:56.949015 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.949453 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.949501 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.962217 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.962296 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.973604 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I0630 14:06:56.974149 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.974664 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.974695 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.975099 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.975299 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.975756 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0630 14:06:56.977204 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:56.977635 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.977698 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.977979 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.978793 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.978814 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.979233 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.979861 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.979908 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.983635 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0630 14:06:56.984067 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0630 14:06:56.984613 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.985289 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.985309 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.985797 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.986422 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.986466 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.987326 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0630 14:06:56.987554 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I0630 14:06:56.988111 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.988781 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.988800 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.988868 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
	I0630 14:06:56.989272 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.989514 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.989982 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.990005 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.990076 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.990136 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.990167 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.990395 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.990688 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.990745 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.991420 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.992366 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:56.992419 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:56.992669 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I0630 14:06:56.993907 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:56.995228 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:56.995248 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:56.995880 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:56.997265 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:56.999293 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0630 14:06:56.999370 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.001508 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0630 14:06:57.002883 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0630 14:06:57.002916 1460091 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0630 14:06:57.002942 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.003610 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0630 14:06:57.005195 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0630 14:06:57.005935 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.005991 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.006255 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I0630 14:06:57.006289 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.006456 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.006802 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.007205 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0630 14:06:57.007321 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I0630 14:06:57.007438 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007452 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.007601 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007616 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.007742 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.007767 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.008050 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008112 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.008285 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008301 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.008675 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.008703 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.008723 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.008787 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.008808 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.009263 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.009378 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.009421 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.009781 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.010031 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.010108 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.010355 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.010373 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.010513 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.010533 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.010629 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.010969 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.010977 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.011283 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.011304 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.011392 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.011650 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.011783 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.011867 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.012379 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.012423 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.012599 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.012859 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.012877 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.013047 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.013778 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.014215 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.014495 1460091 addons.go:238] Setting addon default-storageclass=true in "addons-412730"
	I0630 14:06:57.014541 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:06:57.014778 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.014972 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.015012 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.015647 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.017091 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.017305 1460091 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.41.0
	I0630 14:06:57.017315 1460091 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0630 14:06:57.019235 1460091 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0630 14:06:57.019245 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0630 14:06:57.019258 1460091 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0630 14:06:57.019263 1460091 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0630 14:06:57.019284 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.019284 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.019356 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0630 14:06:57.020515 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45803
	I0630 14:06:57.020579 1460091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:06:57.020596 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0630 14:06:57.020635 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.021372 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.021977 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.022038 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.022485 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.023104 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.023180 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.023405 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.023860 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.023897 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.025612 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.025864 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.025948 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43573
	I0630 14:06:57.026240 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.026420 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.026868 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.028570 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029396 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.029420 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029587 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.029699 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.029761 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.029777 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.029959 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.030089 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.030322 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.030383 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.030669 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.031123 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.031274 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.031289 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.031683 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.037907 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.038177 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.039744 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0630 14:06:57.039978 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42319
	I0630 14:06:57.040537 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.040729 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.041308 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.041328 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.041600 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.041615 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.041928 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.042164 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.042315 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.044033 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0630 14:06:57.044725 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.045331 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.045350 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.045878 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.045938 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.046425 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0630 14:06:57.047116 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.047396 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.047496 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.048257 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.048279 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.048498 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.049312 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.049440 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.049911 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.050622 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0630 14:06:57.050709 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:06:57.051429 1460091 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0630 14:06:57.051993 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.053508 1460091 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:06:57.053531 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0630 14:06:57.053554 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.054413 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42375
	I0630 14:06:57.054437 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:06:57.054478 1460091 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.35
	I0630 14:06:57.054413 1460091 out.go:177]   - Using image docker.io/registry:3.0.0
	I0630 14:06:57.054933 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.055768 1460091 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0630 14:06:57.055790 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0630 14:06:57.055812 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.055852 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.055876 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.056303 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.056581 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0630 14:06:57.056594 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.056599 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0630 14:06:57.056622 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.057388 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
	I0630 14:06:57.058752 1460091 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:06:57.058770 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0630 14:06:57.058788 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.059503 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.060288 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.060307 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.060551 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.060762 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.060918 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.060980 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.061036 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.061516 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0630 14:06:57.062190 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.062207 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.062733 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.062771 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.062855 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.062894 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.062999 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.063152 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.063283 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.063407 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.063631 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.12.1
	I0630 14:06:57.063848 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.063854 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0630 14:06:57.063891 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0630 14:06:57.064349 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.064387 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.064484 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.064596 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.064660 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.064704 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.064881 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.064942 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.065098 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.065315 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.065331 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.065402 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.065624 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.066156 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.066196 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.066203 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.12.1
	I0630 14:06:57.066852 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.066874 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.066915 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0630 14:06:57.067252 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.067449 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.067944 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.068048 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.068097 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.068228 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.068613 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.068623 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.068822 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.068891 1460091 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.12.1
	I0630 14:06:57.069115 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.069121 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.069356 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.069425 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0630 14:06:57.069576 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.070270 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.070286 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.070342 1460091 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0630 14:06:57.071005 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.071129 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.071152 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.071943 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.071951 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0630 14:06:57.071970 1460091 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0630 14:06:57.071992 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.072108 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.072154 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.072685 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0630 14:06:57.072774 1460091 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0630 14:06:57.072798 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (498069 bytes)
	I0630 14:06:57.072818 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.073341 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.074059 1460091 out.go:177]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0630 14:06:57.074063 1460091 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:06:57.074155 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0630 14:06:57.074179 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.075067 1460091 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.2
	I0630 14:06:57.075229 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:06:57.075246 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0630 14:06:57.075572 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.076243 1460091 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:06:57.076303 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0630 14:06:57.076329 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.078812 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43631
	I0630 14:06:57.079025 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.079130 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.079652 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.080327 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.080351 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.080481 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.080507 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.080634 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.080858 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.081036 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.081055 1460091 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0630 14:06:57.081228 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.081763 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.082138 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.082262 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.082706 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:06:57.082752 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:06:57.083020 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.083040 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083087 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.083100 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083265 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.083494 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.083497 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.083593 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.083780 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.083786 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.083977 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.084112 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.084235 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.084469 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.084506 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.084520 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.084738 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.084918 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.085065 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.085095 1460091 out.go:177]   - Using image docker.io/busybox:stable
	I0630 14:06:57.085067 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.085223 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.085318 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.085373 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.085526 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.085673 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.085865 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.086430 1460091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:06:57.086442 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0630 14:06:57.086455 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.087486 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0630 14:06:57.087965 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.088516 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.088545 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.089121 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.089329 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.089866 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.090528 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.090554 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.090740 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.090964 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.091072 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.091131 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.091254 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.092992 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0630 14:06:57.094599 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0630 14:06:57.095998 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0630 14:06:57.097039 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0630 14:06:57.098265 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0630 14:06:57.099547 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0630 14:06:57.100645 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0630 14:06:57.101875 1460091 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0630 14:06:57.103299 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0630 14:06:57.103321 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0630 14:06:57.103347 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.107000 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0630 14:06:57.107083 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.107594 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:06:57.107627 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.107650 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.107840 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.108051 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.108244 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.108441 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:06:57.108455 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:06:57.108453 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:06:57.108913 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:06:57.109191 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:06:57.111002 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:06:57.111252 1460091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0630 14:06:57.111268 1460091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0630 14:06:57.111288 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:06:57.114635 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.115172 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:06:57.115248 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:06:57.115422 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:06:57.115624 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:06:57.115796 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:06:57.115964 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	W0630 14:06:57.363795 1460091 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36374->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.363842 1460091 retry.go:31] will retry after 315.136796ms: ssh: handshake failed: read tcp 192.168.39.1:36374->192.168.39.114:22: read: connection reset by peer
	W0630 14:06:57.364018 1460091 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36380->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.364049 1460091 retry.go:31] will retry after 155.525336ms: ssh: handshake failed: read tcp 192.168.39.1:36380->192.168.39.114:22: read: connection reset by peer
	I0630 14:06:57.701875 1460091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0630 14:06:57.701976 1460091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0630 14:06:57.837038 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0630 14:06:57.837063 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0630 14:06:57.838628 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0630 14:06:57.843008 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0630 14:06:57.843041 1460091 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0630 14:06:57.872159 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0630 14:06:57.909976 1460091 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:06:57.910010 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14737 bytes)
	I0630 14:06:57.932688 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0630 14:06:57.932733 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0630 14:06:57.995639 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0630 14:06:58.066461 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0630 14:06:58.080857 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0630 14:06:58.080899 1460091 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0630 14:06:58.095890 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0630 14:06:58.137462 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0630 14:06:58.206306 1460091 node_ready.go:35] waiting up to 6m0s for node "addons-412730" to be "Ready" ...
	I0630 14:06:58.209015 1460091 node_ready.go:49] node "addons-412730" is "Ready"
	I0630 14:06:58.209060 1460091 node_ready.go:38] duration metric: took 2.705097ms for node "addons-412730" to be "Ready" ...
	I0630 14:06:58.209080 1460091 api_server.go:52] waiting for apiserver process to appear ...
	I0630 14:06:58.209140 1460091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:06:58.223118 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0630 14:06:58.377311 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0630 14:06:58.393265 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0630 14:06:58.552870 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0630 14:06:58.629965 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0630 14:06:58.630008 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0630 14:06:58.758806 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0630 14:06:58.758842 1460091 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0630 14:06:58.850972 1460091 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:06:58.851001 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0630 14:06:59.026553 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0630 14:06:59.026591 1460091 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0630 14:06:59.029024 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0630 14:06:59.029049 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0630 14:06:59.194467 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0630 14:06:59.225323 1460091 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0630 14:06:59.225365 1460091 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0630 14:06:59.275081 1460091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:06:59.275114 1460091 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0630 14:06:59.277525 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0630 14:06:59.360873 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0630 14:06:59.360922 1460091 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0630 14:06:59.365441 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0630 14:06:59.365473 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0630 14:06:59.479182 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0630 14:06:59.479223 1460091 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0630 14:06:59.632112 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0630 14:06:59.730609 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0630 14:06:59.730651 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0630 14:06:59.924237 1460091 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:06:59.924273 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0630 14:06:59.952744 1460091 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:06:59.952779 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0630 14:07:00.295758 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0630 14:07:00.295801 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0630 14:07:00.609047 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:07:00.711006 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0630 14:07:01.077427 1460091 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0630 14:07:01.077478 1460091 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0630 14:07:01.488779 1460091 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.786858112s)
	I0630 14:07:01.488824 1460091 start.go:972] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0630 14:07:01.488851 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.650181319s)
	I0630 14:07:01.488917 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:01.488939 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:01.489367 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:01.489386 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:01.489398 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:01.489407 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:01.489675 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:01.489692 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:01.519482 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0630 14:07:01.519507 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0630 14:07:01.953943 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0630 14:07:01.953981 1460091 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0630 14:07:02.000299 1460091 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-412730" context rescaled to 1 replicas
	I0630 14:07:02.634511 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0630 14:07:02.634547 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0630 14:07:03.286523 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0630 14:07:03.286560 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0630 14:07:03.817225 1460091 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:07:03.817256 1460091 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0630 14:07:04.096118 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0630 14:07:04.096173 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:07:04.099962 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:04.100533 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:07:04.100570 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:04.100887 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:07:04.101144 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:07:04.101379 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:07:04.101559 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:07:04.500309 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0630 14:07:05.218352 1460091 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0630 14:07:05.643348 1460091 addons.go:238] Setting addon gcp-auth=true in "addons-412730"
	I0630 14:07:05.643433 1460091 host.go:66] Checking if "addons-412730" exists ...
	I0630 14:07:05.643934 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:07:05.643986 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:07:05.660744 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0630 14:07:05.661458 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:07:05.662215 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:07:05.662238 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:07:05.662683 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:07:05.663335 1460091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:07:05.663379 1460091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:07:05.682214 1460091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0630 14:07:05.683058 1460091 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:07:05.683766 1460091 main.go:141] libmachine: Using API Version  1
	I0630 14:07:05.683791 1460091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:07:05.684301 1460091 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:07:05.684542 1460091 main.go:141] libmachine: (addons-412730) Calling .GetState
	I0630 14:07:05.686376 1460091 main.go:141] libmachine: (addons-412730) Calling .DriverName
	I0630 14:07:05.686632 1460091 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0630 14:07:05.686663 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHHostname
	I0630 14:07:05.690202 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:05.690836 1460091 main.go:141] libmachine: (addons-412730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:59:ff", ip: ""} in network mk-addons-412730: {Iface:virbr1 ExpiryTime:2025-06-30 15:06:22 +0000 UTC Type:0 Mac:52:54:00:ac:59:ff Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:addons-412730 Clientid:01:52:54:00:ac:59:ff}
	I0630 14:07:05.690876 1460091 main.go:141] libmachine: (addons-412730) DBG | domain addons-412730 has defined IP address 192.168.39.114 and MAC address 52:54:00:ac:59:ff in network mk-addons-412730
	I0630 14:07:05.691075 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHPort
	I0630 14:07:05.691278 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHKeyPath
	I0630 14:07:05.691467 1460091 main.go:141] libmachine: (addons-412730) Calling .GetSSHUsername
	I0630 14:07:05.691655 1460091 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/addons-412730/id_rsa Username:docker}
	I0630 14:07:11.565837 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.693634263s)
	I0630 14:07:11.565899 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.565914 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.565980 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.570295044s)
	I0630 14:07:11.566027 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.499537s)
	I0630 14:07:11.566089 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (13.470173071s)
	I0630 14:07:11.566122 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566098 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566168 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566176 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.42868021s)
	I0630 14:07:11.566202 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566212 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566039 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566229 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566242 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566137 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566252 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566260 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566283 1460091 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (13.357116893s)
	I0630 14:07:11.566302 1460091 api_server.go:72] duration metric: took 14.670334608s to wait for apiserver process to appear ...
	I0630 14:07:11.566309 1460091 api_server.go:88] waiting for apiserver healthz status ...
	I0630 14:07:11.566329 1460091 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I0630 14:07:11.566328 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (13.343175575s)
	I0630 14:07:11.566350 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566360 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566359 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (13.189016834s)
	I0630 14:07:11.566380 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566389 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566439 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566447 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566456 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566462 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566686 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.566242 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566727 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566737 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566745 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566753 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566773 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566782 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566789 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566794 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566839 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.566844 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.173547374s)
	I0630 14:07:11.566862 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.566868 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566871 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566874 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.566881 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.566753 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567113 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567151 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567170 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567176 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567183 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.567190 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.567203 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567217 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567249 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.567258 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.567271 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567282 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567309 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567329 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567335 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567250 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567548 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.567578 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.567585 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.567976 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.568014 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.568021 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.568825 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.568856 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.568865 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.566881 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569293 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.016393005s)
	I0630 14:07:11.569320 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569328 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569412 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.374918327s)
	I0630 14:07:11.569425 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569431 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569478 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.291926439s)
	I0630 14:07:11.569490 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569497 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569593 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.937451446s)
	I0630 14:07:11.569615 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569624 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.569735 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.960641721s)
	W0630 14:07:11.569757 1460091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:07:11.569775 1460091 retry.go:31] will retry after 330.589533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0630 14:07:11.569820 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.858779326s)
	I0630 14:07:11.569834 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.569841 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570507 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.570534 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.570540 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.570547 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.570552 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570841 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.570867 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.570873 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.570879 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.570884 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.570993 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.571027 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.571032 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.571041 1460091 addons.go:479] Verifying addon metrics-server=true in "addons-412730"
	I0630 14:07:11.571778 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.571807 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.571816 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.571823 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.571830 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.571917 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.572331 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.572343 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.572353 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.572362 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.572758 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.572789 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.572797 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.572807 1460091 addons.go:479] Verifying addon ingress=true in "addons-412730"
	I0630 14:07:11.573202 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573214 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573223 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.573229 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.573243 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573257 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573283 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573302 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573308 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573315 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.573321 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.573502 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573535 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.573568 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573586 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573947 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.573962 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.573971 1460091 addons.go:479] Verifying addon registry=true in "addons-412730"
	I0630 14:07:11.574975 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575013 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.575195 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.575240 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575258 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.575424 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.575449 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.574703 1460091 out.go:177] * Verifying ingress addon...
	I0630 14:07:11.574951 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.576902 1460091 out.go:177] * Verifying registry addon...
	I0630 14:07:11.577803 1460091 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-412730 service yakd-dashboard -n yakd-dashboard
	
	I0630 14:07:11.578734 1460091 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0630 14:07:11.579547 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0630 14:07:11.618799 1460091 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I0630 14:07:11.642386 1460091 api_server.go:141] control plane version: v1.33.2
	I0630 14:07:11.642428 1460091 api_server.go:131] duration metric: took 76.109211ms to wait for apiserver health ...
	I0630 14:07:11.642442 1460091 system_pods.go:43] waiting for kube-system pods to appear ...
	I0630 14:07:11.648379 1460091 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0630 14:07:11.648411 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:11.648426 1460091 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0630 14:07:11.648448 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:11.787935 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.787961 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.788293 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.788355 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	W0630 14:07:11.788482 1460091 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0630 14:07:11.788776 1460091 system_pods.go:59] 17 kube-system pods found
	I0630 14:07:11.788844 1460091 system_pods.go:61] "amd-gpu-device-plugin-jk4pf" [669e6afe-7041-4750-a8b3-b9b16b2c1200] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:07:11.788873 1460091 system_pods.go:61] "coredns-674b8bbfcf-55nn4" [f9bb36d9-fcc7-40a9-a574-a0c0d4a2e249] Running
	I0630 14:07:11.788883 1460091 system_pods.go:61] "csi-hostpath-attacher-0" [b2871319-8553-4b97-acc6-9fa791a121e7] Pending
	I0630 14:07:11.788891 1460091 system_pods.go:61] "etcd-addons-412730" [0d20e35f-0200-4c76-93c7-c5dc73170568] Running
	I0630 14:07:11.788902 1460091 system_pods.go:61] "kube-apiserver-addons-412730" [f635944a-97e7-41a4-93a2-bb7fcee2b33b] Running
	I0630 14:07:11.788912 1460091 system_pods.go:61] "kube-controller-manager-addons-412730" [bc65f29f-9646-460b-bbd6-d7633581c597] Running
	I0630 14:07:11.788923 1460091 system_pods.go:61] "kube-ingress-dns-minikube" [b9186cc8-be28-421d-8259-84f8fa275c24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:07:11.788933 1460091 system_pods.go:61] "kube-proxy-mgntr" [b2ebef04-6f35-4cb1-a058-5694a72ff27d] Running
	I0630 14:07:11.788941 1460091 system_pods.go:61] "kube-scheduler-addons-412730" [8cb21dd0-89ca-47fb-99e5-03acd8d6fc0f] Running
	I0630 14:07:11.788951 1460091 system_pods.go:61] "metrics-server-7fbb699795-kjqlg" [517ec2e4-c4bc-45b6-ada2-68d1e16b2f19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:07:11.788965 1460091 system_pods.go:61] "nvidia-device-plugin-daemonset-x5r2c" [b30b72eb-28c1-4e3a-972e-9db47c66ac6f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:07:11.788979 1460091 system_pods.go:61] "registry-694bd45846-xjdfn" [2538157e-75f2-429a-9ee9-dcbb6f56a814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:07:11.788992 1460091 system_pods.go:61] "registry-creds-6b69cdcdd5-kxnxr" [5d9d53ec-f97e-4851-9025-f208d9a9e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:07:11.789005 1460091 system_pods.go:61] "registry-proxy-dzp7x" [52f4bc70-5ad7-47f4-bd99-fc5cd471afab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:07:11.789017 1460091 system_pods.go:61] "snapshot-controller-68b874b76f-pn4tl" [26ebb6e6-2f9c-47b1-a6a2-d0bc2631fc74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.789029 1460091 system_pods.go:61] "snapshot-controller-68b874b76f-v6vkl" [3e0abe0b-9975-45f8-ba9b-1b5d010607ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.789036 1460091 system_pods.go:61] "storage-provisioner" [c5a4662a-1e04-4f23-bf87-a78f5608f496] Running
	I0630 14:07:11.789049 1460091 system_pods.go:74] duration metric: took 146.59926ms to wait for pod list to return data ...
	I0630 14:07:11.789066 1460091 default_sa.go:34] waiting for default service account to be created ...
	I0630 14:07:11.852937 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:11.852969 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:11.853375 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:11.853431 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:11.853445 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:11.859436 1460091 default_sa.go:45] found service account: "default"
	I0630 14:07:11.859476 1460091 default_sa.go:55] duration metric: took 70.393128ms for default service account to be created ...
	I0630 14:07:11.859487 1460091 system_pods.go:116] waiting for k8s-apps to be running ...
	I0630 14:07:11.900655 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0630 14:07:11.926835 1460091 system_pods.go:86] 18 kube-system pods found
	I0630 14:07:11.926878 1460091 system_pods.go:89] "amd-gpu-device-plugin-jk4pf" [669e6afe-7041-4750-a8b3-b9b16b2c1200] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0630 14:07:11.926886 1460091 system_pods.go:89] "coredns-674b8bbfcf-55nn4" [f9bb36d9-fcc7-40a9-a574-a0c0d4a2e249] Running
	I0630 14:07:11.926914 1460091 system_pods.go:89] "csi-hostpath-attacher-0" [b2871319-8553-4b97-acc6-9fa791a121e7] Pending
	I0630 14:07:11.926919 1460091 system_pods.go:89] "csi-hostpathplugin-z9jlw" [9852b523-2f8d-4c9a-85e8-7ac58ed5eebb] Pending
	I0630 14:07:11.926925 1460091 system_pods.go:89] "etcd-addons-412730" [0d20e35f-0200-4c76-93c7-c5dc73170568] Running
	I0630 14:07:11.926931 1460091 system_pods.go:89] "kube-apiserver-addons-412730" [f635944a-97e7-41a4-93a2-bb7fcee2b33b] Running
	I0630 14:07:11.926940 1460091 system_pods.go:89] "kube-controller-manager-addons-412730" [bc65f29f-9646-460b-bbd6-d7633581c597] Running
	I0630 14:07:11.926949 1460091 system_pods.go:89] "kube-ingress-dns-minikube" [b9186cc8-be28-421d-8259-84f8fa275c24] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0630 14:07:11.926958 1460091 system_pods.go:89] "kube-proxy-mgntr" [b2ebef04-6f35-4cb1-a058-5694a72ff27d] Running
	I0630 14:07:11.926966 1460091 system_pods.go:89] "kube-scheduler-addons-412730" [8cb21dd0-89ca-47fb-99e5-03acd8d6fc0f] Running
	I0630 14:07:11.926977 1460091 system_pods.go:89] "metrics-server-7fbb699795-kjqlg" [517ec2e4-c4bc-45b6-ada2-68d1e16b2f19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0630 14:07:11.926990 1460091 system_pods.go:89] "nvidia-device-plugin-daemonset-x5r2c" [b30b72eb-28c1-4e3a-972e-9db47c66ac6f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0630 14:07:11.927011 1460091 system_pods.go:89] "registry-694bd45846-xjdfn" [2538157e-75f2-429a-9ee9-dcbb6f56a814] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0630 14:07:11.927030 1460091 system_pods.go:89] "registry-creds-6b69cdcdd5-kxnxr" [5d9d53ec-f97e-4851-9025-f208d9a9e0a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0630 14:07:11.927042 1460091 system_pods.go:89] "registry-proxy-dzp7x" [52f4bc70-5ad7-47f4-bd99-fc5cd471afab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0630 14:07:11.927050 1460091 system_pods.go:89] "snapshot-controller-68b874b76f-pn4tl" [26ebb6e6-2f9c-47b1-a6a2-d0bc2631fc74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.927061 1460091 system_pods.go:89] "snapshot-controller-68b874b76f-v6vkl" [3e0abe0b-9975-45f8-ba9b-1b5d010607ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0630 14:07:11.927074 1460091 system_pods.go:89] "storage-provisioner" [c5a4662a-1e04-4f23-bf87-a78f5608f496] Running
	I0630 14:07:11.927089 1460091 system_pods.go:126] duration metric: took 67.593682ms to wait for k8s-apps to be running ...
	I0630 14:07:11.927104 1460091 system_svc.go:44] waiting for kubelet service to be running ....
	I0630 14:07:11.927169 1460091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:07:12.193770 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:12.193803 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:12.354834 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.854466413s)
	I0630 14:07:12.354924 1460091 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.668263946s)
	I0630 14:07:12.354926 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:12.355156 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:12.355521 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:12.355577 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:12.355605 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:12.355625 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:12.355646 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:12.355981 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:12.356003 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:12.356015 1460091 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-412730"
	I0630 14:07:12.356885 1460091 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
	I0630 14:07:12.357715 1460091 out.go:177] * Verifying csi-hostpath-driver addon...
	I0630 14:07:12.359034 1460091 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0630 14:07:12.359721 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0630 14:07:12.360023 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0630 14:07:12.360041 1460091 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0630 14:07:12.406216 1460091 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0630 14:07:12.406263 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:12.559364 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0630 14:07:12.559403 1460091 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0630 14:07:12.584643 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:12.585219 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:12.665811 1460091 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:07:12.665844 1460091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0630 14:07:12.836140 1460091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0630 14:07:12.865786 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:13.084231 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:13.084272 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:13.365331 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:13.585910 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:13.586224 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:13.635029 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.734314641s)
	I0630 14:07:13.635075 1460091 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.707884059s)
	I0630 14:07:13.635092 1460091 system_svc.go:56] duration metric: took 1.707986766s WaitForService to wait for kubelet
	I0630 14:07:13.635101 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:13.635119 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:13.635108 1460091 kubeadm.go:578] duration metric: took 16.739135366s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0630 14:07:13.635141 1460091 node_conditions.go:102] verifying NodePressure condition ...
	I0630 14:07:13.635462 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:13.635484 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:13.635497 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:13.635507 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:13.635808 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:13.635828 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:13.638761 1460091 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0630 14:07:13.638792 1460091 node_conditions.go:123] node cpu capacity is 2
	I0630 14:07:13.638809 1460091 node_conditions.go:105] duration metric: took 3.661934ms to run NodePressure ...
	I0630 14:07:13.638826 1460091 start.go:241] waiting for startup goroutines ...
	I0630 14:07:13.875752 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:14.024111 1460091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.187911729s)
	I0630 14:07:14.024195 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:14.024227 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:14.024586 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:14.024683 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:14.024691 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:14.024702 1460091 main.go:141] libmachine: Making call to close driver server
	I0630 14:07:14.024712 1460091 main.go:141] libmachine: (addons-412730) Calling .Close
	I0630 14:07:14.024994 1460091 main.go:141] libmachine: Successfully made call to close driver server
	I0630 14:07:14.025013 1460091 main.go:141] libmachine: Making call to close connection to plugin binary
	I0630 14:07:14.025043 1460091 main.go:141] libmachine: (addons-412730) DBG | Closing plugin on server side
	I0630 14:07:14.026382 1460091 addons.go:479] Verifying addon gcp-auth=true in "addons-412730"
	I0630 14:07:14.029054 1460091 out.go:177] * Verifying gcp-auth addon...
	I0630 14:07:14.031483 1460091 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0630 14:07:14.064027 1460091 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0630 14:07:14.064055 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:14.100781 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:14.114141 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:14.365832 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:14.534739 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:14.583821 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:14.584016 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:14.864558 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:15.035462 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:15.083316 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:15.083872 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:15.363154 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:15.536843 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:15.584338 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:15.585465 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:15.864842 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:16.035682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:16.084017 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:16.084651 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:16.497202 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:16.537408 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:16.584546 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:16.587004 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:16.863546 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:17.035257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:17.082833 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:17.083256 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:17.367136 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:17.536257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:17.583638 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:17.584977 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:17.896589 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:18.035682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:18.083625 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:18.084228 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:18.363753 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:18.535354 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:18.583096 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:18.583122 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:18.955635 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:19.035257 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:19.083049 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:19.083420 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:19.364160 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:19.536108 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:19.582458 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:19.583611 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:19.862653 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:20.034233 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:20.082846 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:20.083682 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:20.364310 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:20.535698 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:20.583894 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:20.583979 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:20.863445 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:21.036429 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:21.084981 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:21.085104 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:21.363349 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:21.706174 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:21.707208 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:21.707678 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:21.865772 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:22.035893 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:22.083199 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:22.084016 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:22.364233 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:22.535367 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:22.583354 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:22.583535 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:22.865792 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:23.035789 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:23.136995 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:23.137134 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:23.363626 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:23.535937 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:23.582498 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:23.583466 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:23.864738 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:24.034476 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:24.083541 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:24.084048 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:24.364616 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:24.536239 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:24.583008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:24.583026 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:24.864935 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:25.035523 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:25.082940 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:25.083056 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:25.363774 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:25.534897 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:25.583749 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:25.583954 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:25.863865 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:26.034706 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:26.084015 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:26.084175 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:26.363040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:26.536862 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:26.583797 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:26.583943 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.189951 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:27.190109 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:27.190223 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.191199 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:27.366231 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:27.535516 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:27.584025 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:27.584989 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:27.864198 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:28.037431 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:28.082788 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:28.083975 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:28.363252 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:28.535710 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:28.583888 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:28.584004 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:28.864040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:29.034895 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:29.082915 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:29.083605 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:29.363381 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:29.535032 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:29.582676 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:29.583815 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:29.865439 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:30.036869 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:30.084069 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:30.084108 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:30.364800 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:30.535912 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:30.583840 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:30.585080 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:30.864767 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:31.044830 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:31.084386 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:31.084487 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:31.364893 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:31.623955 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:31.624096 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:31.625461 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:31.863871 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:32.035869 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:32.085127 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:32.086207 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:32.373662 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:32.539255 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:32.587456 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:32.588975 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:32.863384 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:33.037175 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:33.083368 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:33.086594 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:33.363683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:33.535971 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:33.582220 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:33.583079 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:33.864086 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:34.035104 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:34.087614 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:34.090507 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:34.364243 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:34.535472 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:34.582842 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:34.583065 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:34.864351 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:35.038245 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:35.083459 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:35.083968 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:35.364140 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:35.535203 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:35.583507 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:35.583504 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:35.864421 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:36.035870 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:36.082290 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:36.083322 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:36.363896 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:36.536935 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:36.592002 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0630 14:07:36.592024 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:36.867249 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:37.035497 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:37.082561 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:37.083545 1460091 kapi.go:107] duration metric: took 25.503987228s to wait for kubernetes.io/minikube-addons=registry ...
	I0630 14:07:37.364896 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:37.535915 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:37.582416 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:37.863882 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:38.035195 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:38.084077 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:38.363908 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:38.536012 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:38.582871 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:38.865977 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:39.036008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:39.083221 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:39.366301 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:39.537043 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:39.584445 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:39.864115 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:40.035178 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:40.082503 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:40.364953 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:40.539118 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:40.582790 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:40.920318 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:41.039974 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:41.140897 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:41.363490 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:41.536671 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:41.584110 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.151839 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:42.151893 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.151941 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:42.364151 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:42.535860 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:42.637454 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:42.869058 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:43.034755 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:43.083141 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:43.365516 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:43.539831 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:43.585574 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:43.867882 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:44.035437 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:44.083399 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:44.364009 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:44.534997 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:44.582616 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:44.865028 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:45.034987 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:45.083033 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:45.363797 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:45.536061 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:45.582192 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:45.863930 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:46.035610 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:46.082940 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:46.363183 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:46.536317 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:46.582800 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:46.863634 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:47.035461 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:47.082263 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:47.364204 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:47.537008 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:47.638719 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:47.867382 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:48.035628 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:48.082998 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:48.363676 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:48.535845 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:48.583373 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:48.865933 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:49.035994 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:49.082615 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:49.364741 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:49.763038 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:49.763188 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:49.864019 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:50.034923 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:50.081789 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:50.363509 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:50.536302 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:50.582756 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.084972 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:51.085222 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:51.088586 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.365037 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:51.536393 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:51.583205 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:51.863948 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:52.036793 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:52.083280 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:52.363764 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:52.534903 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:52.582225 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:52.863489 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:53.035662 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:53.083237 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:53.363683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:53.535229 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:53.582794 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:53.864519 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:54.035606 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:54.083006 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:54.363649 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:54.534894 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:54.582432 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:54.874053 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:55.036295 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:55.138176 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:55.439408 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:55.536289 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:55.583387 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:55.877077 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:56.038681 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:56.088650 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:56.364716 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:56.537099 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:56.638302 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:56.888274 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:57.065461 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:57.082558 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:57.364271 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:57.537383 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:57.584203 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:57.864829 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:58.035093 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:58.082842 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:58.368712 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:58.536145 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:58.583188 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:58.864081 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:59.035171 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:59.082395 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:59.363881 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:07:59.770427 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:07:59.775289 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:07:59.886727 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:00.036389 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:00.138257 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:00.365066 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:00.543394 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:00.587828 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:00.862860 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:01.045510 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:01.084722 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:01.370626 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:01.543476 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:01.643717 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:01.863100 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:02.036395 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:02.083306 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:02.364022 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:02.536447 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:02.582849 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:02.863402 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:03.043769 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:03.084338 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:03.364984 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:03.537068 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:03.583105 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:03.873833 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:04.064570 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:04.165207 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:04.363705 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:04.534655 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:04.582773 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:04.865214 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:05.040132 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:05.082101 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:05.364071 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:05.535996 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:05.583847 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:05.864830 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:06.035167 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:06.082727 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:06.364040 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:06.536325 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:06.584424 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:06.867769 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0630 14:08:07.035374 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:07.085873 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:07.363748 1460091 kapi.go:107] duration metric: took 55.004020875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0630 14:08:07.535663 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:07.583300 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:08.036340 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:08.083025 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:08.537501 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:08.583289 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:09.035787 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:09.083288 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:09.536861 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:09.895410 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:10.036972 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:10.103056 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:10.537875 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:10.583172 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:11.036116 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:11.082706 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:11.537110 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:11.583096 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:12.035141 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:12.083220 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:12.535683 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:12.583269 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:13.035346 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:13.085856 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:13.535419 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:13.584214 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:14.035523 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:14.086182 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:14.538450 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:14.584164 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:15.035469 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:15.082710 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:15.535978 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:15.584976 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:16.035643 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:16.083354 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:16.536216 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:16.582722 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:17.036015 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:17.082827 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:17.535105 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:17.582197 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:18.036044 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:18.082594 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:18.535731 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:18.636867 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:19.040011 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:19.084634 1460091 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0630 14:08:19.538800 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:19.584691 1460091 kapi.go:107] duration metric: took 1m8.005950872s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0630 14:08:20.046904 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:20.544735 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:21.045744 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:21.545748 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:22.039630 1460091 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0630 14:08:22.538370 1460091 kapi.go:107] duration metric: took 1m8.506886725s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0630 14:08:22.539980 1460091 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-412730 cluster.
	I0630 14:08:22.541245 1460091 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0630 14:08:22.542490 1460091 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0630 14:08:22.544085 1460091 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, volcano, inspektor-gadget, registry-creds, cloud-spanner, metrics-server, ingress-dns, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0630 14:08:22.545451 1460091 addons.go:514] duration metric: took 1m25.649456906s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin volcano inspektor-gadget registry-creds cloud-spanner metrics-server ingress-dns storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0630 14:08:22.545505 1460091 start.go:246] waiting for cluster config update ...
	I0630 14:08:22.545527 1460091 start.go:255] writing updated cluster config ...
	I0630 14:08:22.545830 1460091 ssh_runner.go:195] Run: rm -f paused
	I0630 14:08:22.552874 1460091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:08:22.645593 1460091 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-55nn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.650587 1460091 pod_ready.go:94] pod "coredns-674b8bbfcf-55nn4" is "Ready"
	I0630 14:08:22.650616 1460091 pod_ready.go:86] duration metric: took 4.992795ms for pod "coredns-674b8bbfcf-55nn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.653714 1460091 pod_ready.go:83] waiting for pod "etcd-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.658042 1460091 pod_ready.go:94] pod "etcd-addons-412730" is "Ready"
	I0630 14:08:22.658066 1460091 pod_ready.go:86] duration metric: took 4.323836ms for pod "etcd-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.660310 1460091 pod_ready.go:83] waiting for pod "kube-apiserver-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.664410 1460091 pod_ready.go:94] pod "kube-apiserver-addons-412730" is "Ready"
	I0630 14:08:22.664433 1460091 pod_ready.go:86] duration metric: took 4.099276ms for pod "kube-apiserver-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.666354 1460091 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:22.958219 1460091 pod_ready.go:94] pod "kube-controller-manager-addons-412730" is "Ready"
	I0630 14:08:22.958253 1460091 pod_ready.go:86] duration metric: took 291.880924ms for pod "kube-controller-manager-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.158459 1460091 pod_ready.go:83] waiting for pod "kube-proxy-mgntr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.557555 1460091 pod_ready.go:94] pod "kube-proxy-mgntr" is "Ready"
	I0630 14:08:23.557587 1460091 pod_ready.go:86] duration metric: took 399.092549ms for pod "kube-proxy-mgntr" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:23.758293 1460091 pod_ready.go:83] waiting for pod "kube-scheduler-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:24.157033 1460091 pod_ready.go:94] pod "kube-scheduler-addons-412730" is "Ready"
	I0630 14:08:24.157070 1460091 pod_ready.go:86] duration metric: took 398.746217ms for pod "kube-scheduler-addons-412730" in "kube-system" namespace to be "Ready" or be gone ...
	I0630 14:08:24.157088 1460091 pod_ready.go:40] duration metric: took 1.604151264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0630 14:08:24.206500 1460091 start.go:607] kubectl: 1.33.2, cluster: 1.33.2 (minor skew: 0)
	I0630 14:08:24.208969 1460091 out.go:177] * Done! kubectl is now configured to use "addons-412730" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	46e9c486237cc       56cc512116c8f       5 minutes ago       Running             busybox                                  0                   5b8f43d306a71       busybox
	a41e1f5d78ba3       158e2f2d90f21       12 minutes ago      Running             controller                               0                   ad79beda1cd96       ingress-nginx-controller-67687b59dd-vvcrv
	0383a04db64b6       738351fd438f0       12 minutes ago      Running             csi-snapshotter                          0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	b2d34cd3b4b5f       931dbfd16f87c       12 minutes ago      Running             csi-provisioner                          0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	7083636dce9aa       e899260153aed       12 minutes ago      Running             liveness-probe                           0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	bfebc08e181a7       e255e073c508c       12 minutes ago      Running             hostpath                                 0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	49bfc828f9828       88ef14a257f42       12 minutes ago      Running             node-driver-registrar                    0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	02d5183cb541e       19a639eda60f0       12 minutes ago      Running             csi-resizer                              0                   1b37be17df7f2       csi-hostpath-resizer-0
	40b28663fd84f       a1ed5895ba635       12 minutes ago      Running             csi-external-health-monitor-controller   0                   b4fec9a2b5ea5       csi-hostpathplugin-z9jlw
	b66ddaac6e88a       59cbb42146a37       12 minutes ago      Running             csi-attacher                             0                   6f9489fdc4235       csi-hostpath-attacher-0
	2c3efa502f6ac       0ea86a0862033       12 minutes ago      Exited              patch                                    0                   479724e3cf758       ingress-nginx-admission-patch-fl6cb
	dca6ca157e955       aa61ee9c70bc4       12 minutes ago      Running             volume-snapshot-controller               0                   82ccf34d900ac       snapshot-controller-68b874b76f-v6vkl
	8ff6da260516f       0ea86a0862033       12 minutes ago      Exited              create                                   0                   104d25c1177d7       ingress-nginx-admission-create-gpszb
	b61ad9d665eb6       aa61ee9c70bc4       12 minutes ago      Running             volume-snapshot-controller               0                   9aa1ac650c210       snapshot-controller-68b874b76f-pn4tl
	9d1dce2bd3c5f       e16d1e3a10667       12 minutes ago      Running             local-path-provisioner                   0                   115dda0086b6d       local-path-provisioner-76f89f99b5-rnqpb
	2618e4dc11783       30dd67412fdea       12 minutes ago      Running             minikube-ingress-dns                     0                   0fd95f2b44624       kube-ingress-dns-minikube
	811184505fb18       d5e667c0f2bb6       13 minutes ago      Running             amd-gpu-device-plugin                    0                   b44acdeabc7e9       amd-gpu-device-plugin-jk4pf
	60e507365f1d3       6e38f40d628db       13 minutes ago      Running             storage-provisioner                      0                   c81c97cad8c5e       storage-provisioner
	8e1e019f61b20       1cf5f116067c6       13 minutes ago      Running             coredns                                  0                   f0e3a5c4dc1ba       coredns-674b8bbfcf-55nn4
	e9d272ef95cc8       661d404f36f01       13 minutes ago      Running             kube-proxy                               0                   ec083bc9ceaf6       kube-proxy-mgntr
	cda40c61e5780       cfed1ff748928       13 minutes ago      Running             kube-scheduler                           0                   8b62447a9ffbc       kube-scheduler-addons-412730
	0f5bd8617276d       ee794efa53d85       13 minutes ago      Running             kube-apiserver                           0                   296d470d26007       kube-apiserver-addons-412730
	ed722ba732c02       ff4f56c76b82d       13 minutes ago      Running             kube-controller-manager                  0                   6de0b1c4abb94       kube-controller-manager-addons-412730
	0aa8fdef51063       499038711c081       13 minutes ago      Running             etcd                                     0                   2ea511d5408a9       etcd-addons-412730
	
	
	==> containerd <==
	Jun 30 14:19:38 addons-412730 containerd[860]: time="2025-06-30T14:19:38.425892345Z" level=info msg="StopPodSandbox for \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\""
	Jun 30 14:19:38 addons-412730 containerd[860]: time="2025-06-30T14:19:38.491060719Z" level=info msg="shim disconnected" id=8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18 namespace=k8s.io
	Jun 30 14:19:38 addons-412730 containerd[860]: time="2025-06-30T14:19:38.491108412Z" level=warning msg="cleaning up after shim disconnected" id=8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18 namespace=k8s.io
	Jun 30 14:19:38 addons-412730 containerd[860]: time="2025-06-30T14:19:38.491124668Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jun 30 14:19:38 addons-412730 containerd[860]: time="2025-06-30T14:19:38.597727110Z" level=info msg="TearDown network for sandbox \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\" successfully"
	Jun 30 14:19:38 addons-412730 containerd[860]: time="2025-06-30T14:19:38.597957760Z" level=info msg="StopPodSandbox for \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\" returns successfully"
	Jun 30 14:19:54 addons-412730 containerd[860]: time="2025-06-30T14:19:54.018747081Z" level=info msg="StopPodSandbox for \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\""
	Jun 30 14:19:54 addons-412730 containerd[860]: time="2025-06-30T14:19:54.050839009Z" level=info msg="TearDown network for sandbox \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\" successfully"
	Jun 30 14:19:54 addons-412730 containerd[860]: time="2025-06-30T14:19:54.050887392Z" level=info msg="StopPodSandbox for \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\" returns successfully"
	Jun 30 14:19:54 addons-412730 containerd[860]: time="2025-06-30T14:19:54.051768320Z" level=info msg="RemovePodSandbox for \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\""
	Jun 30 14:19:54 addons-412730 containerd[860]: time="2025-06-30T14:19:54.051907218Z" level=info msg="Forcibly stopping sandbox \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\""
	Jun 30 14:19:54 addons-412730 containerd[860]: time="2025-06-30T14:19:54.080886890Z" level=info msg="TearDown network for sandbox \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\" successfully"
	Jun 30 14:19:54 addons-412730 containerd[860]: time="2025-06-30T14:19:54.089098118Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jun 30 14:19:54 addons-412730 containerd[860]: time="2025-06-30T14:19:54.089407762Z" level=info msg="RemovePodSandbox \"8ae5eae7446641d6c9f6b2caa1332af693492842ca1d7e65969c942bc61b7c18\" returns successfully"
	Jun 30 14:20:08 addons-412730 containerd[860]: time="2025-06-30T14:20:08.752308722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76,Uid:2df93f77-f330-4c94-9458-069c8cba79a5,Namespace:local-path-storage,Attempt:0,}"
	Jun 30 14:20:08 addons-412730 containerd[860]: time="2025-06-30T14:20:08.904086998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 30 14:20:08 addons-412730 containerd[860]: time="2025-06-30T14:20:08.904224652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 30 14:20:08 addons-412730 containerd[860]: time="2025-06-30T14:20:08.904235898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 30 14:20:08 addons-412730 containerd[860]: time="2025-06-30T14:20:08.904675835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 30 14:20:08 addons-412730 containerd[860]: time="2025-06-30T14:20:08.990769729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76,Uid:2df93f77-f330-4c94-9458-069c8cba79a5,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"ae20e9cc5d702cca611c6d794412460e8dc6f4dc7453ff5059d03566bf754215\""
	Jun 30 14:20:08 addons-412730 containerd[860]: time="2025-06-30T14:20:08.992893937Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Jun 30 14:20:08 addons-412730 containerd[860]: time="2025-06-30T14:20:08.996840337Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:20:09 addons-412730 containerd[860]: time="2025-06-30T14:20:09.083226206Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:20:09 addons-412730 containerd[860]: time="2025-06-30T14:20:09.202537105Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Jun 30 14:20:09 addons-412730 containerd[860]: time="2025-06-30T14:20:09.202604906Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=10979"
	
	
	==> coredns [8e1e019f61b2004e8815ddbaf9eb6f733467fc8a79bd77196bc0c76b85b8b99c] <==
	[INFO] 10.244.0.7:37816 - 48483 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00020548s
	[INFO] 10.244.0.7:37816 - 18283 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000160064s
	[INFO] 10.244.0.7:37816 - 57759 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000505163s
	[INFO] 10.244.0.7:37816 - 2367 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000121216s
	[INFO] 10.244.0.7:37816 - 32941 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000407687s
	[INFO] 10.244.0.7:37816 - 38124 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00021235s
	[INFO] 10.244.0.7:37816 - 42370 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000448784s
	[INFO] 10.244.0.7:49788 - 53103 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000191609s
	[INFO] 10.244.0.7:49788 - 52743 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161724s
	[INFO] 10.244.0.7:59007 - 35302 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000389724s
	[INFO] 10.244.0.7:59007 - 35035 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000520532s
	[INFO] 10.244.0.7:46728 - 65447 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133644s
	[INFO] 10.244.0.7:46728 - 65148 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00061652s
	[INFO] 10.244.0.7:50533 - 14727 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000567642s
	[INFO] 10.244.0.7:50533 - 14481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000783618s
	[INFO] 10.244.0.27:51053 - 48711 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000523898s
	[INFO] 10.244.0.27:40917 - 60785 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000642215s
	[INFO] 10.244.0.27:35189 - 63805 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096026s
	[INFO] 10.244.0.27:43478 - 6990 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00040325s
	[INFO] 10.244.0.27:53994 - 15788 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170635s
	[INFO] 10.244.0.27:51155 - 39553 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128149s
	[INFO] 10.244.0.27:37346 - 35756 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001274741s
	[INFO] 10.244.0.27:38294 - 56651 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000805113s
	[INFO] 10.244.0.31:54260 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000711267s
	[INFO] 10.244.0.31:46467 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124471s
	
	
	==> describe nodes <==
	Name:               addons-412730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-412730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=addons-412730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_06_53_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-412730
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-412730"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:06:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-412730
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:20:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:15:22 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:15:22 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:15:22 +0000   Mon, 30 Jun 2025 14:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:15:22 +0000   Mon, 30 Jun 2025 14:06:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    addons-412730
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc9448cb8b5448fc9151301fb29bc0cd
	  System UUID:                bc9448cb-8b54-48fc-9151-301fb29bc0cd
	  Boot ID:                    6141a1b2-f9ea-4f8f-bc9e-ef270348f968
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  ingress-nginx               ingress-nginx-controller-67687b59dd-vvcrv                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         13m
	  kube-system                 amd-gpu-device-plugin-jk4pf                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-674b8bbfcf-55nn4                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-z9jlw                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-412730                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-412730                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-412730                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-mgntr                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-412730                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-68b874b76f-pn4tl                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-68b874b76f-v6vkl                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-76f89f99b5-rnqpb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-412730 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-412730 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-412730 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-412730 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-412730 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-412730 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-412730 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-412730 event: Registered Node addons-412730 in Controller
	
	
	==> dmesg <==
	[  +7.400861] kauditd_printk_skb: 40 callbacks suppressed
	[  +4.862777] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.721987] kauditd_printk_skb: 3 callbacks suppressed
	[  +3.179109] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.932449] kauditd_printk_skb: 47 callbacks suppressed
	[  +4.007047] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.735579] kauditd_printk_skb: 26 callbacks suppressed
	[Jun30 14:08] kauditd_printk_skb: 76 callbacks suppressed
	[  +4.704545] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.836614] kauditd_printk_skb: 61 callbacks suppressed
	[Jun30 14:09] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:10] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:13] kauditd_printk_skb: 28 callbacks suppressed
	[Jun30 14:14] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.000048] kauditd_printk_skb: 19 callbacks suppressed
	[ +11.983780] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.925929] kauditd_printk_skb: 2 callbacks suppressed
	[Jun30 14:15] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.009854] kauditd_printk_skb: 28 callbacks suppressed
	[  +1.375797] kauditd_printk_skb: 61 callbacks suppressed
	[  +3.058612] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.836555] kauditd_printk_skb: 9 callbacks suppressed
	[Jun30 14:17] kauditd_printk_skb: 1 callbacks suppressed
	[Jun30 14:19] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [0aa8fdef5106381a33bf7fae10904caa793ace481cae1d43127914ffe86d49ff] <==
	{"level":"warn","ts":"2025-06-30T14:07:49.751637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.210142ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:49.751838Z","caller":"traceutil/trace.go:171","msg":"trace[1184992035] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1203; }","duration":"187.410383ms","start":"2025-06-30T14:07:49.564417Z","end":"2025-06-30T14:07:49.751827Z","steps":["trace[1184992035] 'agreement among raft nodes before linearized reading'  (duration: 187.200791ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:49.752758Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.403506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:49.751590Z","caller":"traceutil/trace.go:171","msg":"trace[559772973] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"267.154952ms","start":"2025-06-30T14:07:49.483661Z","end":"2025-06-30T14:07:49.750816Z","steps":["trace[559772973] 'process raft request'  (duration: 266.932951ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:07:49.752866Z","caller":"traceutil/trace.go:171","msg":"trace[154741241] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1203; }","duration":"176.571713ms","start":"2025-06-30T14:07:49.576287Z","end":"2025-06-30T14:07:49.752858Z","steps":["trace[154741241] 'agreement among raft nodes before linearized reading'  (duration: 176.438082ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.060101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.201972ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156627244712664246 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/snapshot-controller-68b874b76f-v6vkl.184dd73930f85720\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/snapshot-controller-68b874b76f-v6vkl.184dd73930f85720\" value_size:707 lease:3156627244712664233 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-06-30T14:07:51.060508Z","caller":"traceutil/trace.go:171","msg":"trace[1403560008] linearizableReadLoop","detail":"{readStateIndex:1246; appliedIndex:1245; }","duration":"269.602891ms","start":"2025-06-30T14:07:50.790891Z","end":"2025-06-30T14:07:51.060494Z","steps":["trace[1403560008] 'read index received'  (duration: 53.900301ms)","trace[1403560008] 'applied index is now lower than readState.Index'  (duration: 215.701517ms)"],"step_count":2}
	{"level":"info","ts":"2025-06-30T14:07:51.060687Z","caller":"traceutil/trace.go:171","msg":"trace[1928328932] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"282.940847ms","start":"2025-06-30T14:07:50.777737Z","end":"2025-06-30T14:07:51.060678Z","steps":["trace[1928328932] 'process raft request'  (duration: 67.101901ms)","trace[1928328932] 'compare'  (duration: 214.876695ms)"],"step_count":2}
	{"level":"warn","ts":"2025-06-30T14:07:51.060917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.674634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:51.060970Z","caller":"traceutil/trace.go:171","msg":"trace[1908369901] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1214; }","duration":"254.762861ms","start":"2025-06-30T14:07:50.806198Z","end":"2025-06-30T14:07:51.060961Z","steps":["trace[1908369901] 'agreement among raft nodes before linearized reading'  (duration: 254.494296ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.061332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.462832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-gpszb\" limit:1 ","response":"range_response_count:1 size:4215"}
	{"level":"info","ts":"2025-06-30T14:07:51.061377Z","caller":"traceutil/trace.go:171","msg":"trace[1518962383] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-create-gpszb; range_end:; response_count:1; response_revision:1214; }","duration":"270.575777ms","start":"2025-06-30T14:07:50.790792Z","end":"2025-06-30T14:07:51.061368Z","steps":["trace[1518962383] 'agreement among raft nodes before linearized reading'  (duration: 270.487611ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:51.061955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.960425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:51.062418Z","caller":"traceutil/trace.go:171","msg":"trace[621823114] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1214; }","duration":"205.559852ms","start":"2025-06-30T14:07:50.856769Z","end":"2025-06-30T14:07:51.062329Z","steps":["trace[621823114] 'agreement among raft nodes before linearized reading'  (duration: 204.992694ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:55.431218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.529916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:55.431286Z","caller":"traceutil/trace.go:171","msg":"trace[1840291804] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1254; }","duration":"185.638229ms","start":"2025-06-30T14:07:55.245637Z","end":"2025-06-30T14:07:55.431275Z","steps":["trace[1840291804] 'count revisions from in-memory index tree'  (duration: 185.483282ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.760814Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.563816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.761810Z","caller":"traceutil/trace.go:171","msg":"trace[1037456471] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1289; }","duration":"232.616347ms","start":"2025-06-30T14:07:59.529177Z","end":"2025-06-30T14:07:59.761793Z","steps":["trace[1037456471] 'range keys from in-memory index tree'  (duration: 231.18055ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.762324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.982539ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.762383Z","caller":"traceutil/trace.go:171","msg":"trace[856262130] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1289; }","duration":"197.052432ms","start":"2025-06-30T14:07:59.565321Z","end":"2025-06-30T14:07:59.762373Z","steps":["trace[856262130] 'range keys from in-memory index tree'  (duration: 196.924905ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:07:59.767749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.524873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:07:59.767792Z","caller":"traceutil/trace.go:171","msg":"trace[2033650698] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1289; }","duration":"189.645425ms","start":"2025-06-30T14:07:59.578136Z","end":"2025-06-30T14:07:59.767782Z","steps":["trace[2033650698] 'range keys from in-memory index tree'  (duration: 183.005147ms)"],"step_count":1}
	{"level":"info","ts":"2025-06-30T14:16:47.709200Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1900}
	{"level":"info","ts":"2025-06-30T14:16:47.874708Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1900,"took":"164.330155ms","hash":2534900505,"current-db-size-bytes":12238848,"current-db-size":"12 MB","current-db-size-in-use-bytes":7974912,"current-db-size-in-use":"8.0 MB"}
	{"level":"info","ts":"2025-06-30T14:16:47.875273Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2534900505,"revision":1900,"compact-revision":-1}
	
	
	==> kernel <==
	 14:20:23 up 14 min,  0 users,  load average: 0.60, 0.41, 0.41
	Linux addons-412730 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0f5bd8617276d56b4d1c938db3290f5057a6076ca2a1ff6b72007428d9958a0f] <==
	I0630 14:14:29.388938       1 handler.go:288] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0630 14:14:29.869256       1 cacher.go:183] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0630 14:14:30.002718       1 cacher.go:183] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0630 14:14:30.088081       1 cacher.go:183] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0630 14:14:30.129186       1 cacher.go:183] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0630 14:14:30.136170       1 cacher.go:183] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0630 14:14:30.389854       1 cacher.go:183] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0630 14:14:30.736396       1 cacher.go:183] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0630 14:14:48.025746       1 conn.go:339] Error on socket receive: read tcp 192.168.39.114:8443->192.168.39.1:41032: use of closed network connection
	E0630 14:14:48.212301       1 conn.go:339] Error on socket receive: read tcp 192.168.39.114:8443->192.168.39.1:41066: use of closed network connection
	I0630 14:14:51.319634       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:14:57.554271       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.103.203"}
	I0630 14:14:57.570112       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:03.599033       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:08.183782       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:11.441632       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:11.868485       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0630 14:15:12.083379       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.15.45"}
	I0630 14:15:12.087255       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:16.776061       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:19.939310       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0630 14:15:20.985204       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0630 14:15:31.545392       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:15:42.030628       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0630 14:16:49.559945       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ed722ba732c0211e772331fd643a8e48e5ef0b8cd4b82f97d3a5d69b9aa30756] <==
	E0630 14:18:12.946998       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:18:29.433000       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:18:30.161687       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:18:31.790280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:18:33.421727       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:18:34.360784       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:18:42.242345       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:18:46.253763       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:18:55.091239       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:07.429088       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:15.520239       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:16.951164       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:22.291536       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:28.405122       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:28.635725       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:30.008801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:33.257901       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:46.062658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:46.090129       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:19:59.503204       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:03.629652       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:03.979647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:04.326396       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:16.048961       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0630 14:20:17.795731       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [e9d272ef95cc8f73e12d5cc59f4966731013d924126fc8eb0bd96e6acc623f27] <==
	E0630 14:06:58.349607       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0630 14:06:58.396678       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	E0630 14:06:58.396782       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:06:58.682235       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:06:58.682289       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:06:58.682317       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:06:58.729336       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:06:58.729702       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:06:58.729714       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:06:58.747265       1 config.go:199] "Starting service config controller"
	I0630 14:06:58.747303       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:06:58.747324       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:06:58.747328       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:06:58.747339       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:06:58.747342       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:06:58.747357       1 config.go:329] "Starting node config controller"
	I0630 14:06:58.747360       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:06:58.847644       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0630 14:06:58.847708       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:06:58.847734       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:06:58.848003       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cda40c61e5780477d5a234f04d425f2347a784973443632c68938aea16f474e6] <==
	E0630 14:06:49.633867       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:06:49.633920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:06:49.634247       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:06:49.636896       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:06:49.637563       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:06:49.637783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:06:49.638039       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:06:49.638190       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:06:49.638365       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:06:49.638496       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:06:49.638609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:06:49.638719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:06:49.638999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:06:50.551259       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:06:50.618504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:06:50.628999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0630 14:06:50.679571       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:06:50.702747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:06:50.708224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:06:50.796622       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:06:50.797647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:06:50.806980       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:06:50.808489       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:06:50.967143       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0630 14:06:53.415169       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 30 14:19:38 addons-412730 kubelet[1571]: I0630 14:19:38.735044    1571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vztq\" (UniqueName: \"kubernetes.io/projected/44348f15-724e-4e0a-95a6-7bf671404175-kube-api-access-6vztq\") pod \"44348f15-724e-4e0a-95a6-7bf671404175\" (UID: \"44348f15-724e-4e0a-95a6-7bf671404175\") "
	Jun 30 14:19:38 addons-412730 kubelet[1571]: I0630 14:19:38.735111    1571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/44348f15-724e-4e0a-95a6-7bf671404175-data\") pod \"44348f15-724e-4e0a-95a6-7bf671404175\" (UID: \"44348f15-724e-4e0a-95a6-7bf671404175\") "
	Jun 30 14:19:38 addons-412730 kubelet[1571]: I0630 14:19:38.735149    1571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/44348f15-724e-4e0a-95a6-7bf671404175-script\") pod \"44348f15-724e-4e0a-95a6-7bf671404175\" (UID: \"44348f15-724e-4e0a-95a6-7bf671404175\") "
	Jun 30 14:19:38 addons-412730 kubelet[1571]: I0630 14:19:38.735698    1571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44348f15-724e-4e0a-95a6-7bf671404175-script" (OuterVolumeSpecName: "script") pod "44348f15-724e-4e0a-95a6-7bf671404175" (UID: "44348f15-724e-4e0a-95a6-7bf671404175"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Jun 30 14:19:38 addons-412730 kubelet[1571]: I0630 14:19:38.735786    1571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44348f15-724e-4e0a-95a6-7bf671404175-data" (OuterVolumeSpecName: "data") pod "44348f15-724e-4e0a-95a6-7bf671404175" (UID: "44348f15-724e-4e0a-95a6-7bf671404175"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Jun 30 14:19:38 addons-412730 kubelet[1571]: I0630 14:19:38.738026    1571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44348f15-724e-4e0a-95a6-7bf671404175-kube-api-access-6vztq" (OuterVolumeSpecName: "kube-api-access-6vztq") pod "44348f15-724e-4e0a-95a6-7bf671404175" (UID: "44348f15-724e-4e0a-95a6-7bf671404175"). InnerVolumeSpecName "kube-api-access-6vztq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Jun 30 14:19:38 addons-412730 kubelet[1571]: I0630 14:19:38.835594    1571 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/44348f15-724e-4e0a-95a6-7bf671404175-script\") on node \"addons-412730\" DevicePath \"\""
	Jun 30 14:19:38 addons-412730 kubelet[1571]: I0630 14:19:38.835746    1571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6vztq\" (UniqueName: \"kubernetes.io/projected/44348f15-724e-4e0a-95a6-7bf671404175-kube-api-access-6vztq\") on node \"addons-412730\" DevicePath \"\""
	Jun 30 14:19:38 addons-412730 kubelet[1571]: I0630 14:19:38.835803    1571 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/44348f15-724e-4e0a-95a6-7bf671404175-data\") on node \"addons-412730\" DevicePath \"\""
	Jun 30 14:19:39 addons-412730 kubelet[1571]: E0630 14:19:39.448188    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:19:40 addons-412730 kubelet[1571]: I0630 14:19:40.445734    1571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44348f15-724e-4e0a-95a6-7bf671404175" path="/var/lib/kubelet/pods/44348f15-724e-4e0a-95a6-7bf671404175/volumes"
	Jun 30 14:19:48 addons-412730 kubelet[1571]: E0630 14:19:48.443248    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:19:50 addons-412730 kubelet[1571]: E0630 14:19:50.444318    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:20:01 addons-412730 kubelet[1571]: E0630 14:20:01.443101    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:20:02 addons-412730 kubelet[1571]: E0630 14:20:02.446061    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	Jun 30 14:20:08 addons-412730 kubelet[1571]: I0630 14:20:08.507185    1571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2df93f77-f330-4c94-9458-069c8cba79a5-data\") pod \"helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76\" (UID: \"2df93f77-f330-4c94-9458-069c8cba79a5\") " pod="local-path-storage/helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76"
	Jun 30 14:20:08 addons-412730 kubelet[1571]: I0630 14:20:08.507396    1571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2df93f77-f330-4c94-9458-069c8cba79a5-script\") pod \"helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76\" (UID: \"2df93f77-f330-4c94-9458-069c8cba79a5\") " pod="local-path-storage/helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76"
	Jun 30 14:20:08 addons-412730 kubelet[1571]: I0630 14:20:08.507555    1571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vrt\" (UniqueName: \"kubernetes.io/projected/2df93f77-f330-4c94-9458-069c8cba79a5-kube-api-access-n7vrt\") pod \"helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76\" (UID: \"2df93f77-f330-4c94-9458-069c8cba79a5\") " pod="local-path-storage/helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76"
	Jun 30 14:20:09 addons-412730 kubelet[1571]: E0630 14:20:09.203077    1571 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Jun 30 14:20:09 addons-412730 kubelet[1571]: E0630 14:20:09.203143    1571 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Jun 30 14:20:09 addons-412730 kubelet[1571]: E0630 14:20:09.203617    1571 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:helper-pod,Image:docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79,Command:[/bin/sh /script/setup],Args:[-p /opt/local-path-provisioner/pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76_default_test-pvc -s 67108864 -m Filesystem],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:VOL_DIR,Value:/opt/local-path-provisioner/pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76_default_test-pvc,ValueFrom:nil,},EnvVar{Name:VOL_MODE,Value:Filesystem,ValueFrom:nil,},EnvVar{Name:VOL_SIZE_BYTES,Value:67108864,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:script,ReadOnly:false,MountPath:/script,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:data,ReadOnly:false,MountPath:/opt/local-path-provisioner/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7vrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76_local-path-storage(2df93f77-f330-4c94-9458-069c8cba79a5): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jun 30 14:20:09 addons-412730 kubelet[1571]: E0630 14:20:09.205192    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76" podUID="2df93f77-f330-4c94-9458-069c8cba79a5"
	Jun 30 14:20:09 addons-412730 kubelet[1571]: E0630 14:20:09.594647    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76" podUID="2df93f77-f330-4c94-9458-069c8cba79a5"
	Jun 30 14:20:14 addons-412730 kubelet[1571]: E0630 14:20:14.444369    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="c47e35d5-df9f-4a6a-a3bf-87072a4de2a0"
	Jun 30 14:20:16 addons-412730 kubelet[1571]: E0630 14:20:16.443864    1571 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="64454ac4-31e6-4e37-95db-f9dbfdbc92c3"
	
	
	==> storage-provisioner [60e507365f1d30c7beac2979b93ea374fc72f0bcfb17244185c70d7ea0c4da2b] <==
	W0630 14:19:58.069003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:00.073170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:00.081985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:02.086076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:02.096129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:04.100044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:04.107485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:06.111923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:06.118654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:08.122185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:08.130571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:10.134253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:10.140227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:12.143253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:12.148629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:14.152965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:14.159048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:16.163680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:16.170971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:18.174909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:18.183110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:20.186288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:20.191917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:22.194925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:20:22.200854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-412730 -n addons-412730
helpers_test.go:261: (dbg) Run:  kubectl --context addons-412730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-412730 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-412730 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76: exit status 1 (116.084499ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-412730/192.168.39.114
	Start Time:       Mon, 30 Jun 2025 14:15:12 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tpjf9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tpjf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m12s                  default-scheduler  Successfully assigned default/nginx to addons-412730
	  Warning  Failed     5m12s                  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6544c26a789f03b1a36e45ce8c77ea71d5d3e8d4e07c49ddceccfe0de47aa3e0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m17s (x5 over 5m12s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m17s (x5 over 5m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m17s (x4 over 4m56s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b2e814d28359e77bd0aa5fed1939620075e4ffa0eb20423cc557b375bd5c14ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    8s (x20 over 5m11s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     8s (x20 over 5m11s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-412730/192.168.39.114
	Start Time:       Mon, 30 Jun 2025 14:15:06 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgbht (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-vgbht:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m18s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-412730
	  Normal   Pulling    2m26s (x5 over 5m18s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m26s (x5 over 5m18s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m26s (x5 over 5m18s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x20 over 5m17s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     10s (x20 over 5m17s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmb4n (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jmb4n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gpszb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fl6cb" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-412730 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-gpszb ingress-nginx-admission-patch-fl6cb helper-pod-create-pvc-b6ef0e32-f34a-4739-8d1c-1ac9a8300d76: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.176406647s)
--- FAIL: TestAddons/parallel/LocalPath (345.78s)

TestFunctional/parallel/DashboardCmd (302.35s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-125151 --alsologtostderr -v=1]
functional_test.go:935: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-125151 --alsologtostderr -v=1] ...
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-125151 --alsologtostderr -v=1] stdout:
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-125151 --alsologtostderr -v=1] stderr:
I0630 14:29:14.982729 1472937 out.go:345] Setting OutFile to fd 1 ...
I0630 14:29:14.983119 1472937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:14.983138 1472937 out.go:358] Setting ErrFile to fd 2...
I0630 14:29:14.983145 1472937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:14.983469 1472937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
I0630 14:29:14.983994 1472937 mustload.go:65] Loading cluster: functional-125151
I0630 14:29:14.984556 1472937 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:14.985208 1472937 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:14.985292 1472937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:15.006396 1472937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46307
I0630 14:29:15.007131 1472937 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:15.007780 1472937 main.go:141] libmachine: Using API Version  1
I0630 14:29:15.007822 1472937 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:15.008341 1472937 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:15.008647 1472937 main.go:141] libmachine: (functional-125151) Calling .GetState
I0630 14:29:15.011152 1472937 host.go:66] Checking if "functional-125151" exists ...
I0630 14:29:15.011671 1472937 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:15.011739 1472937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:15.032309 1472937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
I0630 14:29:15.033044 1472937 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:15.033685 1472937 main.go:141] libmachine: Using API Version  1
I0630 14:29:15.033717 1472937 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:15.034474 1472937 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:15.034929 1472937 main.go:141] libmachine: (functional-125151) Calling .DriverName
I0630 14:29:15.035176 1472937 api_server.go:166] Checking apiserver status ...
I0630 14:29:15.035247 1472937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0630 14:29:15.035299 1472937 main.go:141] libmachine: (functional-125151) Calling .GetSSHHostname
I0630 14:29:15.038525 1472937 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:15.039005 1472937 main.go:141] libmachine: (functional-125151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:c3:6b", ip: ""} in network mk-functional-125151: {Iface:virbr1 ExpiryTime:2025-06-30 15:26:09 +0000 UTC Type:0 Mac:52:54:00:78:c3:6b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:functional-125151 Clientid:01:52:54:00:78:c3:6b}
I0630 14:29:15.039039 1472937 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:15.039169 1472937 main.go:141] libmachine: (functional-125151) Calling .GetSSHPort
I0630 14:29:15.039402 1472937 main.go:141] libmachine: (functional-125151) Calling .GetSSHKeyPath
I0630 14:29:15.039639 1472937 main.go:141] libmachine: (functional-125151) Calling .GetSSHUsername
I0630 14:29:15.039814 1472937 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/functional-125151/id_rsa Username:docker}
I0630 14:29:15.167484 1472937 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5560/cgroup
W0630 14:29:15.189340 1472937 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5560/cgroup: Process exited with status 1
stdout:

stderr:
I0630 14:29:15.189418 1472937 ssh_runner.go:195] Run: ls
I0630 14:29:15.196803 1472937 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8441/healthz ...
I0630 14:29:15.202489 1472937 api_server.go:279] https://192.168.39.24:8441/healthz returned 200:
ok
W0630 14:29:15.202561 1472937 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0630 14:29:15.202810 1472937 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:15.202846 1472937 addons.go:69] Setting dashboard=true in profile "functional-125151"
I0630 14:29:15.202863 1472937 addons.go:238] Setting addon dashboard=true in "functional-125151"
I0630 14:29:15.202921 1472937 host.go:66] Checking if "functional-125151" exists ...
I0630 14:29:15.203440 1472937 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:15.203505 1472937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:15.220014 1472937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35721
I0630 14:29:15.220682 1472937 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:15.221273 1472937 main.go:141] libmachine: Using API Version  1
I0630 14:29:15.221300 1472937 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:15.221750 1472937 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:15.222452 1472937 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:15.222534 1472937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:15.240949 1472937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
I0630 14:29:15.241530 1472937 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:15.242073 1472937 main.go:141] libmachine: Using API Version  1
I0630 14:29:15.242097 1472937 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:15.242470 1472937 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:15.242699 1472937 main.go:141] libmachine: (functional-125151) Calling .GetState
I0630 14:29:15.244154 1472937 main.go:141] libmachine: (functional-125151) Calling .DriverName
I0630 14:29:15.246353 1472937 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0630 14:29:15.248168 1472937 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0630 14:29:15.249712 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0630 14:29:15.249737 1472937 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0630 14:29:15.249765 1472937 main.go:141] libmachine: (functional-125151) Calling .GetSSHHostname
I0630 14:29:15.254040 1472937 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:15.254584 1472937 main.go:141] libmachine: (functional-125151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:c3:6b", ip: ""} in network mk-functional-125151: {Iface:virbr1 ExpiryTime:2025-06-30 15:26:09 +0000 UTC Type:0 Mac:52:54:00:78:c3:6b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:functional-125151 Clientid:01:52:54:00:78:c3:6b}
I0630 14:29:15.254701 1472937 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:15.254807 1472937 main.go:141] libmachine: (functional-125151) Calling .GetSSHPort
I0630 14:29:15.255138 1472937 main.go:141] libmachine: (functional-125151) Calling .GetSSHKeyPath
I0630 14:29:15.255325 1472937 main.go:141] libmachine: (functional-125151) Calling .GetSSHUsername
I0630 14:29:15.255536 1472937 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/functional-125151/id_rsa Username:docker}
I0630 14:29:15.384783 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0630 14:29:15.384823 1472937 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0630 14:29:15.419734 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0630 14:29:15.419769 1472937 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0630 14:29:15.448600 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0630 14:29:15.448640 1472937 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0630 14:29:15.473265 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0630 14:29:15.473298 1472937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0630 14:29:15.499241 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0630 14:29:15.499277 1472937 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0630 14:29:15.525484 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0630 14:29:15.525534 1472937 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0630 14:29:15.552588 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0630 14:29:15.552628 1472937 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0630 14:29:15.576750 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0630 14:29:15.576795 1472937 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0630 14:29:15.605425 1472937 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0630 14:29:15.605459 1472937 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0630 14:29:15.627202 1472937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0630 14:29:17.023749 1472937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.396492714s)
I0630 14:29:17.023828 1472937 main.go:141] libmachine: Making call to close driver server
I0630 14:29:17.023846 1472937 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:17.024179 1472937 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:17.024199 1472937 main.go:141] libmachine: (functional-125151) DBG | Closing plugin on server side
I0630 14:29:17.024207 1472937 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:29:17.024228 1472937 main.go:141] libmachine: Making call to close driver server
I0630 14:29:17.024238 1472937 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:17.024503 1472937 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:17.024522 1472937 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:29:17.026389 1472937 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-125151 addons enable metrics-server

I0630 14:29:17.027607 1472937 addons.go:201] Writing out "functional-125151" config to set dashboard=true...
W0630 14:29:17.027936 1472937 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0630 14:29:17.028919 1472937 kapi.go:59] client config for functional-125151: &rest.Config{Host:"https://192.168.39.24:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt", KeyFile:"/home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.key", CAFile:"/home/jenkins/minikube-integration/20991-1452140/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x258ff00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0630 14:29:17.029584 1472937 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0630 14:29:17.029611 1472937 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0630 14:29:17.029621 1472937 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0630 14:29:17.029629 1472937 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0630 14:29:17.029640 1472937 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0630 14:29:17.062652 1472937 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  828d9f89-971b-4951-9cb7-c33a0356cfb9 860 0 2025-06-30 14:29:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-06-30 14:29:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.8.51,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.8.51],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0630 14:29:17.062881 1472937 out.go:270] * Launching proxy ...
* Launching proxy ...
I0630 14:29:17.063012 1472937 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-125151 proxy --port 36195]
I0630 14:29:17.063403 1472937 dashboard.go:157] Waiting for kubectl to output host:port ...
I0630 14:29:17.123598 1472937 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0630 14:29:17.123646 1472937 out.go:270] * Verifying proxy health ...
* Verifying proxy health ...
I0630 14:29:17.132713 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[df9d3245-b026-40f8-b456-25c742033158] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc00070b640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497b80 TLS:<nil>}
I0630 14:29:17.132845 1472937 retry.go:31] will retry after 102.612µs: Temporary Error: unexpected response code: 503
I0630 14:29:17.137782 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[11931f5b-684e-41ef-977a-422ab9b805ea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc000857340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013cdc0 TLS:<nil>}
I0630 14:29:17.137868 1472937 retry.go:31] will retry after 111.144µs: Temporary Error: unexpected response code: 503
I0630 14:29:17.143810 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[11f1fa25-70d4-49ed-ae93-d57555cd7d22] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc000857400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497cc0 TLS:<nil>}
I0630 14:29:17.143891 1472937 retry.go:31] will retry after 260.787µs: Temporary Error: unexpected response code: 503
I0630 14:29:17.150150 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[065294dc-7849-4983-abc9-7333884d4b2a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc00070b780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497e00 TLS:<nil>}
I0630 14:29:17.150233 1472937 retry.go:31] will retry after 327.701µs: Temporary Error: unexpected response code: 503
I0630 14:29:17.155703 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2234289f-b3c7-4076-951c-46eb380ef101] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc000857500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013d040 TLS:<nil>}
I0630 14:29:17.155793 1472937 retry.go:31] will retry after 756.925µs: Temporary Error: unexpected response code: 503
I0630 14:29:17.160091 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bd176611-1fa7-4055-ba0b-6a1bb61516cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc00070b8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000290000 TLS:<nil>}
I0630 14:29:17.160174 1472937 retry.go:31] will retry after 1.136425ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.172665 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[026a05ee-62fa-4da0-a4bb-b7c186b3e0a7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc0008b4fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013d180 TLS:<nil>}
I0630 14:29:17.172761 1472937 retry.go:31] will retry after 1.59866ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.179812 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1c86716a-8f1a-490b-a3e1-bc717b95f01c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc00070b9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ac000 TLS:<nil>}
I0630 14:29:17.179884 1472937 retry.go:31] will retry after 1.435205ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.186239 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a29e470e-bc80-4da8-a78f-3991d8025dfa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc000857600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013d2c0 TLS:<nil>}
I0630 14:29:17.186304 1472937 retry.go:31] will retry after 2.846788ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.196726 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[efeda700-efa4-4a8d-bdf0-9a1fd896da87] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc0009b2040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000290140 TLS:<nil>}
I0630 14:29:17.196813 1472937 retry.go:31] will retry after 5.253174ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.211531 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[85ff3ecf-cc49-45e0-9aa4-755208f29d6a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc000857700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013d400 TLS:<nil>}
I0630 14:29:17.211611 1472937 retry.go:31] will retry after 6.081426ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.235555 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[35112e00-c9c6-4aa5-8cb8-5452bf1c2da2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc0008b5440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000290280 TLS:<nil>}
I0630 14:29:17.235638 1472937 retry.go:31] will retry after 9.937058ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.262883 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[724f5b4b-039f-4397-891b-c1d4901c8db5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc0009b21c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ac140 TLS:<nil>}
I0630 14:29:17.262985 1472937 retry.go:31] will retry after 10.493898ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.278328 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[122020a9-dc32-4b03-9d69-0d05e5caa4c9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc0008577c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013db80 TLS:<nil>}
I0630 14:29:17.278426 1472937 retry.go:31] will retry after 15.253302ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.301078 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2991946d-7bf5-46ea-80ab-37c79f1d3de1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc0008b5540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002903c0 TLS:<nil>}
I0630 14:29:17.301166 1472937 retry.go:31] will retry after 37.35525ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.345992 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[82a0d272-b5a0-49e5-a0c5-b35dab0ce64a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc0008578c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ac280 TLS:<nil>}
I0630 14:29:17.346081 1472937 retry.go:31] will retry after 50.750686ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.403654 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bdea34d8-791e-48a4-8a2a-88a78120004b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc000857980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000290500 TLS:<nil>}
I0630 14:29:17.403738 1472937 retry.go:31] will retry after 57.30924ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.466435 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[704e4b44-8321-46f0-bd40-e0628a2b6bac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc000857ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000290640 TLS:<nil>}
I0630 14:29:17.466534 1472937 retry.go:31] will retry after 65.084137ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.535993 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d73b2ed5-7e9d-4643-af7c-528144a864c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc0008b56c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000290780 TLS:<nil>}
I0630 14:29:17.536060 1472937 retry.go:31] will retry after 157.764739ms: Temporary Error: unexpected response code: 503
I0630 14:29:17.697777 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dcc2bd14-4281-4b78-b870-c472655ec44a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:17 GMT]] Body:0xc0009b2380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ac3c0 TLS:<nil>}
I0630 14:29:17.697881 1472937 retry.go:31] will retry after 326.928712ms: Temporary Error: unexpected response code: 503
I0630 14:29:18.028416 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a29d442f-e0f1-4a3f-9914-6f991b37d848] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:18 GMT]] Body:0xc0009b2480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001746000 TLS:<nil>}
I0630 14:29:18.028496 1472937 retry.go:31] will retry after 440.567997ms: Temporary Error: unexpected response code: 503
I0630 14:29:18.473990 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c1c925da-b5c7-42ac-909b-e800fe941f97] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:18 GMT]] Body:0xc0017ba080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001746140 TLS:<nil>}
I0630 14:29:18.474096 1472937 retry.go:31] will retry after 271.667355ms: Temporary Error: unexpected response code: 503
I0630 14:29:18.965228 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3d44e303-9959-43b3-8637-77b76e449f2c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:18 GMT]] Body:0xc0009b2580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002908c0 TLS:<nil>}
I0630 14:29:18.965306 1472937 retry.go:31] will retry after 547.254995ms: Temporary Error: unexpected response code: 503
I0630 14:29:19.517448 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b03f096a-925c-42e5-8af8-20328da635e8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:19 GMT]] Body:0xc0017ba180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001746280 TLS:<nil>}
I0630 14:29:19.517541 1472937 retry.go:31] will retry after 979.325551ms: Temporary Error: unexpected response code: 503
I0630 14:29:20.503453 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88534103-a045-46e4-a574-db9af104b722] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:20 GMT]] Body:0xc0008b5800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000290a00 TLS:<nil>}
I0630 14:29:20.503543 1472937 retry.go:31] will retry after 2.217259941s: Temporary Error: unexpected response code: 503
I0630 14:29:22.724085 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f84df30-3bd2-4be3-be28-d31aae504fb5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:22 GMT]] Body:0xc0009b26c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ac500 TLS:<nil>}
I0630 14:29:22.724157 1472937 retry.go:31] will retry after 3.365287247s: Temporary Error: unexpected response code: 503
I0630 14:29:26.093573 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d24d0a42-d193-44a4-8792-1666df5f182c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:26 GMT]] Body:0xc0017ba280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017463c0 TLS:<nil>}
I0630 14:29:26.093644 1472937 retry.go:31] will retry after 2.695177278s: Temporary Error: unexpected response code: 503
I0630 14:29:28.792475 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[417bac39-6a0a-44cd-9fd9-4853c7352fd8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:28 GMT]] Body:0xc0017ba340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000290b40 TLS:<nil>}
I0630 14:29:28.792554 1472937 retry.go:31] will retry after 8.003642541s: Temporary Error: unexpected response code: 503
I0630 14:29:36.800863 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8649e80e-3d42-4bda-b12d-20e1880b3451] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:36 GMT]] Body:0xc0017ba3c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001746500 TLS:<nil>}
I0630 14:29:36.800965 1472937 retry.go:31] will retry after 11.275982s: Temporary Error: unexpected response code: 503
I0630 14:29:48.083147 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[26c1af9a-112b-4200-b348-b52630083529] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:29:48 GMT]] Body:0xc0009b3180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000290c80 TLS:<nil>}
I0630 14:29:48.083230 1472937 retry.go:31] will retry after 17.657317215s: Temporary Error: unexpected response code: 503
I0630 14:30:05.744964 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[efd613f5-0788-49c4-b8b8-5dc8172ac864] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:30:05 GMT]] Body:0xc0017ba4c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ac640 TLS:<nil>}
I0630 14:30:05.745046 1472937 retry.go:31] will retry after 26.433261327s: Temporary Error: unexpected response code: 503
I0630 14:30:32.183375 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2413902b-fd42-4b3d-b848-30e2b8a44a73] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:30:32 GMT]] Body:0xc0009b3240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ac780 TLS:<nil>}
I0630 14:30:32.183451 1472937 retry.go:31] will retry after 26.091747226s: Temporary Error: unexpected response code: 503
I0630 14:30:58.279511 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0bbacbf4-fa5a-4bc0-803c-ee17f5f10417] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:30:58 GMT]] Body:0xc0017ba540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ac8c0 TLS:<nil>}
I0630 14:30:58.279653 1472937 retry.go:31] will retry after 21.893995932s: Temporary Error: unexpected response code: 503
I0630 14:31:20.178442 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e65ebae5-9b31-40a3-ad30-309a2a6cfe85] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:31:20 GMT]] Body:0xc000252440 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001746640 TLS:<nil>}
I0630 14:31:20.178533 1472937 retry.go:31] will retry after 39.888422676s: Temporary Error: unexpected response code: 503
I0630 14:32:00.070792 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c326157f-cea6-4fe1-861c-a4c0bfc6db58] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:32:00 GMT]] Body:0xc0017ba080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000212140 TLS:<nil>}
I0630 14:32:00.070897 1472937 retry.go:31] will retry after 36.104932423s: Temporary Error: unexpected response code: 503
I0630 14:32:36.180435 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c970193-2ac9-4918-9a0f-6522930aeda9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:32:36 GMT]] Body:0xc0009b21c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000212280 TLS:<nil>}
I0630 14:32:36.180542 1472937 retry.go:31] will retry after 1m0.78161754s: Temporary Error: unexpected response code: 503
I0630 14:33:36.967352 1472937 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8bd7414a-760f-45b8-872b-39ecd88b7f22] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 30 Jun 2025 14:33:36 GMT]] Body:0xc0017ba080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002123c0 TLS:<nil>}
I0630 14:33:36.967448 1472937 retry.go:31] will retry after 1m25.53333939s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-125151 -n functional-125151
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-125151 logs -n 25: (1.482755382s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-125151 ssh stat                                              | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | /mount-9p/created-by-pod                                                |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh sudo                                              | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | umount -f /mount-9p                                                     |                   |         |         |                     |                     |
	| mount          | -p functional-125151                                                    | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port216868413/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                     |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh findmnt                                           | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                  |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh findmnt                                           | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | -T /mount-9p | grep 9p                                                  |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh -- ls                                             | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | -la /mount-9p                                                           |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh sudo                                              | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC |                     |
	|                | umount -f /mount-9p                                                     |                   |         |         |                     |                     |
	| mount          | -p functional-125151                                                    | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3918773941/001:/mount2  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh findmnt                                           | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC |                     |
	|                | -T /mount1                                                              |                   |         |         |                     |                     |
	| mount          | -p functional-125151                                                    | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3918773941/001:/mount3  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| mount          | -p functional-125151                                                    | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3918773941/001:/mount1  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                  |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh findmnt                                           | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | -T /mount1                                                              |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh findmnt                                           | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | -T /mount2                                                              |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh findmnt                                           | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | -T /mount3                                                              |                   |         |         |                     |                     |
	| mount          | -p functional-125151                                                    | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC |                     |
	|                | --kill=true                                                             |                   |         |         |                     |                     |
	| image          | functional-125151                                                       | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-125151                                                       | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-125151 ssh pgrep                                             | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-125151 image build -t                                        | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | localhost/my-image:functional-125151                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-125151 image ls                                              | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	| image          | functional-125151                                                       | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-125151                                                       | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| update-context | functional-125151                                                       | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-125151                                                       | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-125151                                                       | functional-125151 | jenkins | v1.36.0 | 30 Jun 25 14:29 UTC | 30 Jun 25 14:29 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:29:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:29:14.815262 1472909 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:29:14.815752 1472909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:29:14.815777 1472909 out.go:358] Setting ErrFile to fd 2...
	I0630 14:29:14.815787 1472909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:29:14.816402 1472909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:29:14.817740 1472909 out.go:352] Setting JSON to false
	I0630 14:29:14.818807 1472909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":51078,"bootTime":1751242677,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:29:14.818935 1472909 start.go:140] virtualization: kvm guest
	I0630 14:29:14.820495 1472909 out.go:177] * [functional-125151] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:29:14.822313 1472909 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:29:14.822368 1472909 notify.go:220] Checking for updates...
	I0630 14:29:14.824807 1472909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:29:14.826240 1472909 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:29:14.827465 1472909 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:29:14.828804 1472909 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:29:14.830177 1472909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:29:14.831840 1472909 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:29:14.832261 1472909 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:29:14.832329 1472909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:29:14.850988 1472909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I0630 14:29:14.851537 1472909 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:29:14.852287 1472909 main.go:141] libmachine: Using API Version  1
	I0630 14:29:14.852315 1472909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:29:14.852779 1472909 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:29:14.853027 1472909 main.go:141] libmachine: (functional-125151) Calling .DriverName
	I0630 14:29:14.853327 1472909 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:29:14.853646 1472909 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:29:14.853689 1472909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:29:14.870691 1472909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0630 14:29:14.871295 1472909 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:29:14.871932 1472909 main.go:141] libmachine: Using API Version  1
	I0630 14:29:14.871955 1472909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:29:14.872326 1472909 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:29:14.872558 1472909 main.go:141] libmachine: (functional-125151) Calling .DriverName
	I0630 14:29:14.910919 1472909 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 14:29:14.913336 1472909 start.go:304] selected driver: kvm2
	I0630 14:29:14.913370 1472909 start.go:908] validating driver "kvm2" against &{Name:functional-125151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 ClusterName:functional-125151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:29:14.913522 1472909 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:29:14.916006 1472909 out.go:201] 
	W0630 14:29:14.917527 1472909 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0630 14:29:14.918986 1472909 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6260918f740e3       9a9a9fd723f1d       4 minutes ago       Running             myfrontend                0                   127366c5c2c12       sp-pod
	a1fe7d267102e       56cc512116c8f       4 minutes ago       Exited              mount-munger              0                   8f59758cf8874       busybox-mount
	cc437c6dc9d3b       82e4c8a736a4f       5 minutes ago       Running             echoserver                0                   980cdf338d93b       hello-node-connect-58f9cf68d8-j5qrg
	43710066d4213       82e4c8a736a4f       5 minutes ago       Running             echoserver                0                   61f2098ae5a12       hello-node-fcfd88b6f-ndkbs
	0dc4d97988fb9       6e38f40d628db       5 minutes ago       Running             storage-provisioner       4                   baa039e792cde       storage-provisioner
	9f17957351931       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       3                   baa039e792cde       storage-provisioner
	21116d18d7630       1cf5f116067c6       5 minutes ago       Running             coredns                   2                   b59efdf82bd17       coredns-674b8bbfcf-qmnmt
	07dd8c8aa6a91       ee794efa53d85       5 minutes ago       Running             kube-apiserver            0                   49dbb3d5d2cc7       kube-apiserver-functional-125151
	edb3deacb53f2       ff4f56c76b82d       5 minutes ago       Running             kube-controller-manager   2                   9610c3cebca33       kube-controller-manager-functional-125151
	f1557cb15c290       499038711c081       5 minutes ago       Running             etcd                      2                   25facd404c402       etcd-functional-125151
	73c1627d4b495       cfed1ff748928       5 minutes ago       Running             kube-scheduler            2                   f9249908d2520       kube-scheduler-functional-125151
	3c2e47556d24f       661d404f36f01       5 minutes ago       Running             kube-proxy                2                   1ac5ec4d36d62       kube-proxy-dkpz5
	c67723a4d954e       ff4f56c76b82d       6 minutes ago       Exited              kube-controller-manager   1                   9610c3cebca33       kube-controller-manager-functional-125151
	5f90b67e84005       499038711c081       6 minutes ago       Exited              etcd                      1                   25facd404c402       etcd-functional-125151
	967a91dd434fc       661d404f36f01       7 minutes ago       Exited              kube-proxy                1                   1ac5ec4d36d62       kube-proxy-dkpz5
	2e95eb2be24db       cfed1ff748928       7 minutes ago       Exited              kube-scheduler            1                   f9249908d2520       kube-scheduler-functional-125151
	aac50caa15f09       1cf5f116067c6       7 minutes ago       Exited              coredns                   1                   b59efdf82bd17       coredns-674b8bbfcf-qmnmt
	
	
	==> containerd <==
	Jun 30 14:30:49 functional-125151 containerd[4517]: time="2025-06-30T14:30:49.053243275Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Jun 30 14:30:49 functional-125151 containerd[4517]: time="2025-06-30T14:30:49.056197123Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:30:49 functional-125151 containerd[4517]: time="2025-06-30T14:30:49.143993506Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:30:49 functional-125151 containerd[4517]: time="2025-06-30T14:30:49.267896949Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Jun 30 14:30:49 functional-125151 containerd[4517]: time="2025-06-30T14:30:49.268042224Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Jun 30 14:30:54 functional-125151 containerd[4517]: time="2025-06-30T14:30:54.052777275Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Jun 30 14:30:54 functional-125151 containerd[4517]: time="2025-06-30T14:30:54.055967777Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:30:54 functional-125151 containerd[4517]: time="2025-06-30T14:30:54.152513859Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:30:54 functional-125151 containerd[4517]: time="2025-06-30T14:30:54.277177754Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Jun 30 14:30:54 functional-125151 containerd[4517]: time="2025-06-30T14:30:54.277321676Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Jun 30 14:32:15 functional-125151 containerd[4517]: time="2025-06-30T14:32:15.053822629Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Jun 30 14:32:15 functional-125151 containerd[4517]: time="2025-06-30T14:32:15.060292084Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:32:15 functional-125151 containerd[4517]: time="2025-06-30T14:32:15.148476065Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:32:15 functional-125151 containerd[4517]: time="2025-06-30T14:32:15.276170174Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Jun 30 14:32:15 functional-125151 containerd[4517]: time="2025-06-30T14:32:15.276306365Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Jun 30 14:32:16 functional-125151 containerd[4517]: time="2025-06-30T14:32:16.053350272Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Jun 30 14:32:16 functional-125151 containerd[4517]: time="2025-06-30T14:32:16.056668824Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:32:16 functional-125151 containerd[4517]: time="2025-06-30T14:32:16.133372887Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:32:16 functional-125151 containerd[4517]: time="2025-06-30T14:32:16.258367418Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Jun 30 14:32:16 functional-125151 containerd[4517]: time="2025-06-30T14:32:16.258549135Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	Jun 30 14:32:21 functional-125151 containerd[4517]: time="2025-06-30T14:32:21.051670046Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Jun 30 14:32:21 functional-125151 containerd[4517]: time="2025-06-30T14:32:21.054826602Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:32:21 functional-125151 containerd[4517]: time="2025-06-30T14:32:21.144343017Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jun 30 14:32:21 functional-125151 containerd[4517]: time="2025-06-30T14:32:21.267140866Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Jun 30 14:32:21 functional-125151 containerd[4517]: time="2025-06-30T14:32:21.267168760Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	
	
	==> coredns [21116d18d7630ece7b45a2ee2c7bbf73b421476069c73da028d8b73e67bb09ec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:51945 - 37682 "HINFO IN 6126998910890926596.6285762505447479733. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.04871937s
	
	
	==> coredns [aac50caa15f09320d4a9b38cccf0768369c06dc00b24366e075eb4648fcbf2e4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:39450 - 45671 "HINFO IN 8125507374213840922.8135354278371011317. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017969729s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-125151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-125151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d123085232072938407f243f9b31470aa85634ff
	                    minikube.k8s.io/name=functional-125151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_30T14_26_39_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Jun 2025 14:26:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-125151
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Jun 2025 14:34:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Jun 2025 14:29:42 +0000   Mon, 30 Jun 2025 14:26:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Jun 2025 14:29:42 +0000   Mon, 30 Jun 2025 14:26:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Jun 2025 14:29:42 +0000   Mon, 30 Jun 2025 14:26:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Jun 2025 14:29:42 +0000   Mon, 30 Jun 2025 14:26:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    functional-125151
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4011044Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b6f880623284834b0f87cf13346e20b
	  System UUID:                8b6f8806-2328-4834-b0f8-7cf13346e20b
	  Boot ID:                    43d0d41d-9975-48a0-aeaa-9947c4b25fbc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-j5qrg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  default                     hello-node-fcfd88b6f-ndkbs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     mysql-58ccfd96bb-gvqrl                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    4m57s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 coredns-674b8bbfcf-qmnmt                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m32s
	  kube-system                 etcd-functional-125151                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m37s
	  kube-system                 kube-apiserver-functional-125151              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-functional-125151     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-proxy-dkpz5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-scheduler-functional-125151              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-j4f27    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-w72v9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m31s                  kube-proxy       
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  Starting                 6m23s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m44s (x7 over 7m44s)  kubelet          Node functional-125151 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m44s (x8 over 7m44s)  kubelet          Node functional-125151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m44s (x8 over 7m44s)  kubelet          Node functional-125151 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m37s                  kubelet          Node functional-125151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s                  kubelet          Node functional-125151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m37s                  kubelet          Node functional-125151 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m37s                  kubelet          Node functional-125151 status is now: NodeReady
	  Normal  RegisteredNode           7m34s                  node-controller  Node functional-125151 event: Registered Node functional-125151 in Controller
	  Normal  CIDRAssignmentFailed     7m34s                  cidrAllocator    Node functional-125151 status is now: CIDRAssignmentFailed
	  Normal  Starting                 6m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m56s (x8 over 6m56s)  kubelet          Node functional-125151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m56s (x8 over 6m56s)  kubelet          Node functional-125151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m56s (x7 over 6m56s)  kubelet          Node functional-125151 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m29s                  node-controller  Node functional-125151 event: Registered Node functional-125151 in Controller
	  Normal  Starting                 5m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m39s (x8 over 5m39s)  kubelet          Node functional-125151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m39s (x8 over 5m39s)  kubelet          Node functional-125151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m39s (x7 over 5m39s)  kubelet          Node functional-125151 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m32s                  node-controller  Node functional-125151 event: Registered Node functional-125151 in Controller
	
	
	==> dmesg <==
	[  +0.004270] (rpcbind)[143]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.167162] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.080956] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.109701] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.093600] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.145351] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.621904] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.686502] kauditd_printk_skb: 101 callbacks suppressed
	[Jun30 14:27] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.059551] kauditd_printk_skb: 40 callbacks suppressed
	[  +9.811301] kauditd_printk_skb: 8 callbacks suppressed
	[ +18.635899] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.175212] kauditd_printk_skb: 6 callbacks suppressed
	[Jun30 14:28] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.995563] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.372871] kauditd_printk_skb: 34 callbacks suppressed
	[  +4.201631] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.116215] kauditd_printk_skb: 10 callbacks suppressed
	[Jun30 14:29] kauditd_printk_skb: 55 callbacks suppressed
	[  +0.000052] kauditd_printk_skb: 20 callbacks suppressed
	[  +2.430598] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.358694] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.681814] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [5f90b67e8400578ad2bda68ba8c7466850e90c67bbbf3a68c35af11e490ffc85] <==
	{"level":"info","ts":"2025-06-30T14:27:42.203718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-06-30T14:27:42.203746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgPreVoteResp from 602226ed500416f5 at term 2"}
	{"level":"info","ts":"2025-06-30T14:27:42.203767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became candidate at term 3"}
	{"level":"info","ts":"2025-06-30T14:27:42.203822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2025-06-30T14:27:42.203840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became leader at term 3"}
	{"level":"info","ts":"2025-06-30T14:27:42.203867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2025-06-30T14:27:42.206798Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:functional-125151 ClientURLs:[https://192.168.39.24:2379]}","request-path":"/0/members/602226ed500416f5/attributes","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T14:27:42.206813Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:27:42.206835Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:27:42.207563Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T14:27:42.207606Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T14:27:42.208149Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:27:42.208381Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:27:42.208873Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T14:27:42.209368Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2025-06-30T14:28:29.804559Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-06-30T14:28:29.804672Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-125151","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	{"level":"warn","ts":"2025-06-30T14:28:29.806601Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:28:29.806680Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:28:29.806729Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-30T14:28:29.806739Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"info","ts":"2025-06-30T14:28:29.806760Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602226ed500416f5","current-leader-member-id":"602226ed500416f5"}
	{"level":"info","ts":"2025-06-30T14:28:29.811124Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2025-06-30T14:28:29.811381Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2025-06-30T14:28:29.811393Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-125151","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	
	
	==> etcd [f1557cb15c290ff184a0b3c104cf865d511b53967295057a320ce0a40db3d55f] <==
	{"level":"info","ts":"2025-06-30T14:28:37.781107Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2025-06-30T14:28:39.653826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 is starting a new election at term 3"}
	{"level":"info","ts":"2025-06-30T14:28:39.653893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-06-30T14:28:39.653983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgPreVoteResp from 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2025-06-30T14:28:39.654000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became candidate at term 4"}
	{"level":"info","ts":"2025-06-30T14:28:39.654133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 4"}
	{"level":"info","ts":"2025-06-30T14:28:39.654187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became leader at term 4"}
	{"level":"info","ts":"2025-06-30T14:28:39.654214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 4"}
	{"level":"info","ts":"2025-06-30T14:28:39.656274Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:28:39.656225Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:functional-125151 ClientURLs:[https://192.168.39.24:2379]}","request-path":"/0/members/602226ed500416f5/attributes","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-30T14:28:39.657259Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:28:39.657545Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-30T14:28:39.657865Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-30T14:28:39.657899Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-30T14:28:39.658336Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-30T14:28:39.659439Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-30T14:28:39.660479Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2025-06-30T14:29:18.953404Z","caller":"traceutil/trace.go:171","msg":"trace[62728541] linearizableReadLoop","detail":"{readStateIndex:945; appliedIndex:944; }","duration":"249.979804ms","start":"2025-06-30T14:29:18.703383Z","end":"2025-06-30T14:29:18.953363Z","steps":["trace[62728541] 'read index received'  (duration: 249.754449ms)","trace[62728541] 'applied index is now lower than readState.Index'  (duration: 224.806µs)"],"step_count":2}
	{"level":"info","ts":"2025-06-30T14:29:18.953581Z","caller":"traceutil/trace.go:171","msg":"trace[368384459] transaction","detail":"{read_only:false; response_revision:871; number_of_response:1; }","duration":"265.470693ms","start":"2025-06-30T14:29:18.688096Z","end":"2025-06-30T14:29:18.953567Z","steps":["trace[368384459] 'process raft request'  (duration: 265.083484ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:29:18.953730Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.520272ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:799"}
	{"level":"info","ts":"2025-06-30T14:29:18.953834Z","caller":"traceutil/trace.go:171","msg":"trace[590415687] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:871; }","duration":"214.676961ms","start":"2025-06-30T14:29:18.739145Z","end":"2025-06-30T14:29:18.953822Z","steps":["trace[590415687] 'agreement among raft nodes before linearized reading'  (duration: 214.51316ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:29:18.953909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.52352ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/hello-node\" limit:1 ","response":"range_response_count:1 size:677"}
	{"level":"info","ts":"2025-06-30T14:29:18.953921Z","caller":"traceutil/trace.go:171","msg":"trace[1077019653] range","detail":"{range_begin:/registry/services/specs/default/hello-node; range_end:; response_count:1; response_revision:871; }","duration":"250.564077ms","start":"2025-06-30T14:29:18.703353Z","end":"2025-06-30T14:29:18.953917Z","steps":["trace[1077019653] 'agreement among raft nodes before linearized reading'  (duration: 250.534623ms)"],"step_count":1}
	{"level":"warn","ts":"2025-06-30T14:29:18.954089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.579125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-06-30T14:29:18.954105Z","caller":"traceutil/trace.go:171","msg":"trace[1705376667] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:871; }","duration":"103.618449ms","start":"2025-06-30T14:29:18.850481Z","end":"2025-06-30T14:29:18.954100Z","steps":["trace[1705376667] 'agreement among raft nodes before linearized reading'  (duration: 103.588598ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:34:16 up 8 min,  0 users,  load average: 0.02, 0.30, 0.21
	Linux functional-125151 5.10.207 #1 SMP Sun Jun 29 21:42:14 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [07dd8c8aa6a9144245df9e98c3df8e8f6e7560a028a130f64cb149472dae1a3b] <==
	I0630 14:28:41.833297       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0630 14:28:42.167485       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.24]
	I0630 14:28:42.169298       1 controller.go:667] quota admission added evaluator for: endpoints
	I0630 14:28:42.177416       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0630 14:28:42.735789       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0630 14:28:42.779448       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0630 14:28:42.813361       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0630 14:28:42.822394       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0630 14:28:44.214999       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:28:44.503192       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0630 14:29:01.132735       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:01.138455       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.80.132"}
	I0630 14:29:04.605633       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:05.472917       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:05.476495       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.234.38"}
	I0630 14:29:06.774924       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.160.233"}
	I0630 14:29:06.789646       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:16.340769       1 controller.go:667] quota admission added evaluator for: namespaces
	I0630 14:29:16.900626       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.8.51"}
	I0630 14:29:16.901410       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0630 14:29:17.005262       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.126.193"}
	I0630 14:29:19.750619       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.112.22"}
	I0630 14:29:19.761049       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0630 14:29:26.974145       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8441->192.168.39.1:33506: use of closed network connection
	E0630 14:29:36.004711       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8441->192.168.39.1:59730: use of closed network connection
	
	
	==> kube-controller-manager [c67723a4d954e530144d97168acceb74399af22371755497eab0dc30d80e9a1d] <==
	I0630 14:27:46.945844       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0630 14:27:46.949919       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 14:27:46.967835       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0630 14:27:46.969257       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0630 14:27:46.982453       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0630 14:27:46.987223       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0630 14:27:46.989629       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0630 14:27:46.994743       1 shared_informer.go:357] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0630 14:27:46.996440       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0630 14:27:47.001254       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0630 14:27:47.002563       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0630 14:27:47.004970       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 14:27:47.007491       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0630 14:27:47.012098       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0630 14:27:47.012683       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0630 14:27:47.013224       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-125151"
	I0630 14:27:47.013484       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0630 14:27:47.016042       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 14:27:47.054208       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:27:47.056552       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0630 14:27:47.086197       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:27:47.492577       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:27:47.495041       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:27:47.495058       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:27:47.495065       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [edb3deacb53f21d000750c140b0e3fe3654e47a9f244a2eec91462b0f8bb396d] <==
	I0630 14:28:44.332593       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0630 14:28:44.347829       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0630 14:28:44.348033       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0630 14:28:44.405144       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0630 14:28:44.428052       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0630 14:28:44.469552       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0630 14:28:44.479017       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0630 14:28:44.484794       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0630 14:28:44.497804       1 shared_informer.go:357] "Caches are synced" controller="expand"
	I0630 14:28:44.504850       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:28:44.508013       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0630 14:28:44.924563       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0630 14:28:44.924585       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0630 14:28:44.924592       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0630 14:28:44.940504       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0630 14:29:16.531446       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:29:16.555244       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:29:16.556924       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:29:16.596034       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:29:16.596126       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:29:16.622076       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:29:16.622630       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:29:16.635207       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:29:16.641230       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0630 14:29:16.652326       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [3c2e47556d24faa3f4c5f91b515290eea69ccdac7e1fb152463a9f6fe822a067] <==
	E0630 14:28:31.095229       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-125151\": dial tcp 192.168.39.24:8441: connect: connection refused"
	E0630 14:28:32.123137       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-125151\": dial tcp 192.168.39.24:8441: connect: connection refused"
	E0630 14:28:34.303260       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-125151\": dial tcp 192.168.39.24:8441: connect: connection refused"
	I0630 14:28:40.954302       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	E0630 14:28:40.954378       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:28:41.143280       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:28:41.143329       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:28:41.143351       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:28:41.164372       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:28:41.164789       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:28:41.164819       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:28:41.177167       1 config.go:199] "Starting service config controller"
	I0630 14:28:41.177208       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:28:41.177231       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:28:41.177234       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:28:41.177243       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:28:41.177245       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:28:41.178447       1 config.go:329] "Starting node config controller"
	I0630 14:28:41.178476       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:28:41.278318       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:28:41.278464       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:28:41.278528       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:28:41.278778       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [967a91dd434fc5e80d9b82d830a43c62907afb84c4d5209a6566446eb9393fd5] <==
	E0630 14:27:08.511801       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-125151\": dial tcp 192.168.39.24:8441: connect: connection refused"
	E0630 14:27:09.630385       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-125151\": dial tcp 192.168.39.24:8441: connect: connection refused"
	E0630 14:27:11.836705       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-125151\": dial tcp 192.168.39.24:8441: connect: connection refused"
	E0630 14:27:16.138579       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-125151\": dial tcp 192.168.39.24:8441: connect: connection refused"
	E0630 14:27:34.142471       1 server.go:704] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-125151\": net/http: TLS handshake timeout"
	I0630 14:27:52.394089       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	E0630 14:27:52.394497       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0630 14:27:52.439712       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0630 14:27:52.439761       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0630 14:27:52.439781       1 server_linux.go:145] "Using iptables Proxier"
	I0630 14:27:52.450298       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0630 14:27:52.450552       1 server.go:516] "Version info" version="v1.33.2"
	I0630 14:27:52.450569       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0630 14:27:52.452388       1 config.go:199] "Starting service config controller"
	I0630 14:27:52.452535       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0630 14:27:52.452637       1 config.go:105] "Starting endpoint slice config controller"
	I0630 14:27:52.452751       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0630 14:27:52.452888       1 config.go:440] "Starting serviceCIDR config controller"
	I0630 14:27:52.452916       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0630 14:27:52.453901       1 config.go:329] "Starting node config controller"
	I0630 14:27:52.454098       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0630 14:27:52.553693       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0630 14:27:52.553751       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0630 14:27:52.553696       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0630 14:27:52.554375       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [2e95eb2be24db46b6897859f2489953d37e018e633bbe7ec651aba3e2b7ee707] <==
	E0630 14:27:17.925658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.24:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:27:18.406896       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.24:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:27:28.810408       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.24:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:27:29.211529       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.24:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:27:29.611415       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.24:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:27:30.129177       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.24:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:27:30.352301       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.39.24:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:27:33.939597       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.24:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:27:34.424759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.24:8441/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:27:34.950200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.24:8441/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:27:35.072690       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.24:8441/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:27:35.427341       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.24:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:27:37.085100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.24:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:27:37.366158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.24:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:27:38.821720       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.24:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0630 14:27:38.826902       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.24:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:27:39.583685       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.24:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:27:39.912747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.24:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.24:35946->192.168.39.24:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:27:39.912901       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.24:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.24:37752->192.168.39.24:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:27:39.912907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.24:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.24:35918->192.168.39.24:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:27:39.913057       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.24:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.24:35930->192.168.39.24:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:27:39.913110       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.24:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.39.24:37756->192.168.39.24:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:27:43.439364       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0630 14:27:57.729141       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0630 14:28:29.973170       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [73c1627d4b49516bdf56196eb75b90864f929ea8cbe3e36b90e4258ec119e912] <==
	E0630 14:28:35.107005       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.24:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:28:35.230004       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.24:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:28:35.240780       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.24:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0630 14:28:35.442788       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.24:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0630 14:28:35.448637       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.24:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:28:35.568895       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.24:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:28:35.634360       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.39.24:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:28:35.713362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.24:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0630 14:28:35.811648       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.24:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:28:35.859879       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.24:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:28:36.100989       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.24:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0630 14:28:36.217091       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.24:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0630 14:28:40.943723       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0630 14:28:40.944516       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0630 14:28:40.944589       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0630 14:28:40.944635       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0630 14:28:40.944681       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0630 14:28:40.944766       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0630 14:28:40.944812       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0630 14:28:40.944890       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0630 14:28:40.945026       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0630 14:28:40.945068       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0630 14:28:40.945114       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0630 14:28:40.945154       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I0630 14:28:40.954565       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Jun 30 14:32:33 functional-125151 kubelet[5371]: E0630 14:32:33.053817    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-gvqrl" podUID="f2721b67-e779-4b7c-8810-9f1e66861527"
	Jun 30 14:32:37 functional-125151 kubelet[5371]: E0630 14:32:37.053847    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-j4f27" podUID="6eb306cd-ae95-44a7-9135-7cd3e286fa57"
	Jun 30 14:32:40 functional-125151 kubelet[5371]: E0630 14:32:40.053148    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-w72v9" podUID="658c447f-e12b-41f3-a56b-98a62e1fc8e6"
	Jun 30 14:32:48 functional-125151 kubelet[5371]: E0630 14:32:48.052545    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-gvqrl" podUID="f2721b67-e779-4b7c-8810-9f1e66861527"
	Jun 30 14:32:50 functional-125151 kubelet[5371]: E0630 14:32:50.052361    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-j4f27" podUID="6eb306cd-ae95-44a7-9135-7cd3e286fa57"
	Jun 30 14:32:54 functional-125151 kubelet[5371]: E0630 14:32:54.052425    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-w72v9" podUID="658c447f-e12b-41f3-a56b-98a62e1fc8e6"
	Jun 30 14:33:01 functional-125151 kubelet[5371]: E0630 14:33:01.053069    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-j4f27" podUID="6eb306cd-ae95-44a7-9135-7cd3e286fa57"
	Jun 30 14:33:02 functional-125151 kubelet[5371]: E0630 14:33:02.051381    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-gvqrl" podUID="f2721b67-e779-4b7c-8810-9f1e66861527"
	Jun 30 14:33:08 functional-125151 kubelet[5371]: E0630 14:33:08.052284    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-w72v9" podUID="658c447f-e12b-41f3-a56b-98a62e1fc8e6"
	Jun 30 14:33:12 functional-125151 kubelet[5371]: E0630 14:33:12.052697    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-j4f27" podUID="6eb306cd-ae95-44a7-9135-7cd3e286fa57"
	Jun 30 14:33:14 functional-125151 kubelet[5371]: E0630 14:33:14.052300    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-gvqrl" podUID="f2721b67-e779-4b7c-8810-9f1e66861527"
	Jun 30 14:33:19 functional-125151 kubelet[5371]: E0630 14:33:19.052333    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-w72v9" podUID="658c447f-e12b-41f3-a56b-98a62e1fc8e6"
	Jun 30 14:33:23 functional-125151 kubelet[5371]: E0630 14:33:23.054422    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-j4f27" podUID="6eb306cd-ae95-44a7-9135-7cd3e286fa57"
	Jun 30 14:33:28 functional-125151 kubelet[5371]: E0630 14:33:28.053664    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-gvqrl" podUID="f2721b67-e779-4b7c-8810-9f1e66861527"
	Jun 30 14:33:34 functional-125151 kubelet[5371]: E0630 14:33:34.052419    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-j4f27" podUID="6eb306cd-ae95-44a7-9135-7cd3e286fa57"
	Jun 30 14:33:34 functional-125151 kubelet[5371]: E0630 14:33:34.052442    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-w72v9" podUID="658c447f-e12b-41f3-a56b-98a62e1fc8e6"
	Jun 30 14:33:40 functional-125151 kubelet[5371]: E0630 14:33:40.052158    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-gvqrl" podUID="f2721b67-e779-4b7c-8810-9f1e66861527"
	Jun 30 14:33:45 functional-125151 kubelet[5371]: E0630 14:33:45.052671    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-w72v9" podUID="658c447f-e12b-41f3-a56b-98a62e1fc8e6"
	Jun 30 14:33:49 functional-125151 kubelet[5371]: E0630 14:33:49.052108    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-j4f27" podUID="6eb306cd-ae95-44a7-9135-7cd3e286fa57"
	Jun 30 14:33:55 functional-125151 kubelet[5371]: E0630 14:33:55.054274    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-gvqrl" podUID="f2721b67-e779-4b7c-8810-9f1e66861527"
	Jun 30 14:33:56 functional-125151 kubelet[5371]: E0630 14:33:56.052273    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-w72v9" podUID="658c447f-e12b-41f3-a56b-98a62e1fc8e6"
	Jun 30 14:34:04 functional-125151 kubelet[5371]: E0630 14:34:04.052416    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-j4f27" podUID="6eb306cd-ae95-44a7-9135-7cd3e286fa57"
	Jun 30 14:34:07 functional-125151 kubelet[5371]: E0630 14:34:07.054551    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-gvqrl" podUID="f2721b67-e779-4b7c-8810-9f1e66861527"
	Jun 30 14:34:10 functional-125151 kubelet[5371]: E0630 14:34:10.052056    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-w72v9" podUID="658c447f-e12b-41f3-a56b-98a62e1fc8e6"
	Jun 30 14:34:15 functional-125151 kubelet[5371]: E0630 14:34:15.051616    5371 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-j4f27" podUID="6eb306cd-ae95-44a7-9135-7cd3e286fa57"
	
	
	==> storage-provisioner [0dc4d97988fb9a02f25e214eb740551774a5259ee29ad2a24be1966d76030820] <==
	W0630 14:33:52.473018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:54.476178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:54.487546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:56.491815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:56.500569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:58.504274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:33:58.509843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:00.513480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:00.522894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:02.526387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:02.531377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:04.536186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:04.548173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:06.552129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:06.558392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:08.561891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:08.567262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:10.571815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:10.581547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:12.584901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:12.590869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:14.594083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:14.602741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:16.607547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0630 14:34:16.613003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9f17957351931cf43f7c920a9ecc8a1f44d82a874e41d0c09919d00e9d21c5cf] <==
	I0630 14:28:41.462856       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0630 14:28:41.464429       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-125151 -n functional-125151
helpers_test.go:261: (dbg) Run:  kubectl --context functional-125151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-gvqrl dashboard-metrics-scraper-5d59dccf9b-j4f27 kubernetes-dashboard-7779f9b69b-w72v9
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-125151 describe pod busybox-mount mysql-58ccfd96bb-gvqrl dashboard-metrics-scraper-5d59dccf9b-j4f27 kubernetes-dashboard-7779f9b69b-w72v9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-125151 describe pod busybox-mount mysql-58ccfd96bb-gvqrl dashboard-metrics-scraper-5d59dccf9b-j4f27 kubernetes-dashboard-7779f9b69b-w72v9: exit status 1 (80.910575ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-125151/192.168.39.24
	Start Time:       Mon, 30 Jun 2025 14:29:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  containerd://a1fe7d267102e1ab228abcd7d07416a997cc4f203343d220ce3f1832618f73dc
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 30 Jun 2025 14:29:24 +0000
	      Finished:     Mon, 30 Jun 2025 14:29:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vv2c8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vv2c8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  4m56s  default-scheduler  Successfully assigned default/busybox-mount to functional-125151
	  Normal  Pulling    4m55s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m53s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.983s (1.983s including waiting). Image size: 2395207 bytes.
	  Normal  Created    4m53s  kubelet            Created container: mount-munger
	  Normal  Started    4m53s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-gvqrl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-125151/192.168.39.24
	Start Time:       Mon, 30 Jun 2025 14:29:19 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tj7bk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tj7bk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m58s                 default-scheduler  Successfully assigned default/mysql-58ccfd96bb-gvqrl to functional-125151
	  Normal   Pulling    116s (x5 over 4m57s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     116s (x5 over 4m57s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     116s (x5 over 4m57s)  kubelet            Error: ErrImagePull
	  Warning  Failed     63s (x15 over 4m56s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    10s (x19 over 4m56s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-j4f27" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-w72v9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-125151 describe pod busybox-mount mysql-58ccfd96bb-gvqrl dashboard-metrics-scraper-5d59dccf9b-j4f27 kubernetes-dashboard-7779f9b69b-w72v9: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.35s)
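The failures above all trace back to the same root cause: Docker Hub's unauthenticated pull rate limit (the 429 "toomanyrequests" in the mysql events). Once a pull fails, the kubelet retries under its default image-pull backoff, which doubles the delay on each attempt up to a 5-minute cap — consistent with the spacing of the BackOff events in the log. A minimal sketch of that schedule (function name is illustrative, not kubelet code):

```python
def pull_backoff_delays(attempts, initial=10.0, cap=300.0):
    """Kubelet-style image-pull backoff: the delay doubles per retry, capped."""
    delays, delay = [], initial
    for _ in range(attempts):
        delays.append(min(delay, cap))
        delay *= 2
    return delays

# After six failed pulls the retry interval has already hit the 300s ceiling.
print(pull_backoff_delays(6))
```

With a ~300s retry interval, a test with a 5-6 minute deadline can easily expire before a pull succeeds, which is why rate-limited pulls surface as timeouts rather than hard errors.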

                                                
                                    

Test pass (286/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.05
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.16
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.33.2/json-events 4.06
13 TestDownloadOnly/v1.33.2/preload-exists 0
17 TestDownloadOnly/v1.33.2/LogsDuration 0.07
18 TestDownloadOnly/v1.33.2/DeleteAll 0.16
19 TestDownloadOnly/v1.33.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
22 TestOffline 62.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 138.02
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.52
35 TestAddons/parallel/Registry 14.91
36 TestAddons/parallel/RegistryCreds 0.7
38 TestAddons/parallel/InspektorGadget 11.8
39 TestAddons/parallel/MetricsServer 7.03
42 TestAddons/parallel/Headlamp 16.78
43 TestAddons/parallel/CloudSpanner 5.67
45 TestAddons/parallel/NvidiaDevicePlugin 6.54
46 TestAddons/parallel/Yakd 11.76
48 TestAddons/StoppedEnableDisable 91.34
49 TestCertOptions 74.11
50 TestCertExpiration 324.31
52 TestForceSystemdFlag 100.75
53 TestForceSystemdEnv 73.01
55 TestKVMDriverInstallOrUpdate 1.51
59 TestErrorSpam/setup 46.16
60 TestErrorSpam/start 0.39
61 TestErrorSpam/status 0.86
62 TestErrorSpam/pause 1.81
63 TestErrorSpam/unpause 1.93
64 TestErrorSpam/stop 5.72
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 64.17
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 69.51
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.13
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
76 TestFunctional/serial/CacheCmd/cache/add_local 1.04
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 43.81
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.46
87 TestFunctional/serial/LogsFileCmd 1.5
88 TestFunctional/serial/InvalidService 4.32
90 TestFunctional/parallel/ConfigCmd 0.38
92 TestFunctional/parallel/DryRun 0.31
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 1.14
98 TestFunctional/parallel/ServiceCmdConnect 10.83
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 29.81
102 TestFunctional/parallel/SSHCmd 0.5
103 TestFunctional/parallel/CpCmd 1.74
104 TestFunctional/parallel/MySQL 363.88
105 TestFunctional/parallel/FileSync 0.25
106 TestFunctional/parallel/CertSync 1.54
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
114 TestFunctional/parallel/License 0.14
115 TestFunctional/parallel/ServiceCmd/DeployApp 11.28
125 TestFunctional/parallel/Version/short 0.05
126 TestFunctional/parallel/Version/components 0.47
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
131 TestFunctional/parallel/ImageCommands/ImageBuild 3.38
132 TestFunctional/parallel/ImageCommands/Setup 0.44
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.22
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
140 TestFunctional/parallel/ServiceCmd/List 0.48
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.97
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
143 TestFunctional/parallel/ServiceCmd/Format 0.63
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
148 TestFunctional/parallel/ServiceCmd/URL 0.36
149 TestFunctional/parallel/ProfileCmd/profile_list 0.42
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
151 TestFunctional/parallel/MountCmd/any-port 7.57
152 TestFunctional/parallel/MountCmd/specific-port 1.96
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 217.33
162 TestMultiControlPlane/serial/DeployApp 5.3
163 TestMultiControlPlane/serial/PingHostFromPods 1.24
164 TestMultiControlPlane/serial/AddWorkerNode 49.89
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.99
167 TestMultiControlPlane/serial/CopyFile 14.49
168 TestMultiControlPlane/serial/StopSecondaryNode 91.74
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
170 TestMultiControlPlane/serial/RestartSecondaryNode 26.54
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 405.09
173 TestMultiControlPlane/serial/DeleteSecondaryNode 7.27
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
175 TestMultiControlPlane/serial/StopCluster 272.92
176 TestMultiControlPlane/serial/RestartCluster 123.09
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
178 TestMultiControlPlane/serial/AddSecondaryNode 79.27
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1
183 TestJSONOutput/start/Command 57.94
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.76
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.36
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 96.57
215 TestMountStart/serial/StartWithMountFirst 29.1
216 TestMountStart/serial/VerifyMountFirst 0.4
217 TestMountStart/serial/StartWithMountSecond 27.98
218 TestMountStart/serial/VerifyMountSecond 0.4
219 TestMountStart/serial/DeleteFirst 0.75
220 TestMountStart/serial/VerifyMountPostDelete 0.4
221 TestMountStart/serial/Stop 1.46
222 TestMountStart/serial/RestartStopped 23.77
223 TestMountStart/serial/VerifyMountPostStop 0.41
226 TestMultiNode/serial/FreshStart2Nodes 114.13
227 TestMultiNode/serial/DeployApp2Nodes 4.3
228 TestMultiNode/serial/PingHostFrom2Pods 0.8
229 TestMultiNode/serial/AddNode 50.88
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.63
232 TestMultiNode/serial/CopyFile 7.91
233 TestMultiNode/serial/StopNode 2.47
234 TestMultiNode/serial/StartAfterStop 36.47
235 TestMultiNode/serial/RestartKeepsNodes 310.5
236 TestMultiNode/serial/DeleteNode 2.37
237 TestMultiNode/serial/StopMultiNode 182.01
238 TestMultiNode/serial/RestartMultiNode 86.58
239 TestMultiNode/serial/ValidateNameConflict 50.67
244 TestPreload 228.23
246 TestScheduledStopUnix 119.62
250 TestRunningBinaryUpgrade 204.1
252 TestKubernetesUpgrade 208.83
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 100.47
264 TestNetworkPlugins/group/false 3.47
268 TestStoppedBinaryUpgrade/Setup 0.43
269 TestStoppedBinaryUpgrade/Upgrade 182.37
270 TestNoKubernetes/serial/StartWithStopK8s 75.03
271 TestNoKubernetes/serial/Start 36.68
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
273 TestNoKubernetes/serial/ProfileList 7.1
282 TestPause/serial/Start 83.14
283 TestNoKubernetes/serial/Stop 1.59
284 TestNoKubernetes/serial/StartNoArgs 60.68
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
287 TestPause/serial/SecondStartNoReconfiguration 110.08
288 TestNetworkPlugins/group/auto/Start 91
289 TestNetworkPlugins/group/flannel/Start 77.36
290 TestPause/serial/Pause 0.9
291 TestPause/serial/VerifyStatus 0.29
292 TestPause/serial/Unpause 0.78
293 TestPause/serial/PauseAgain 0.95
294 TestPause/serial/DeletePaused 1.12
295 TestPause/serial/VerifyDeletedResources 2.39
296 TestNetworkPlugins/group/enable-default-cni/Start 68.88
297 TestNetworkPlugins/group/auto/KubeletFlags 0.25
298 TestNetworkPlugins/group/auto/NetCatPod 8.32
299 TestNetworkPlugins/group/auto/DNS 0.19
300 TestNetworkPlugins/group/auto/Localhost 0.18
301 TestNetworkPlugins/group/auto/HairPin 0.17
302 TestNetworkPlugins/group/bridge/Start 67.27
303 TestNetworkPlugins/group/flannel/ControllerPod 6.01
304 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
305 TestNetworkPlugins/group/flannel/NetCatPod 10.23
306 TestNetworkPlugins/group/flannel/DNS 0.2
307 TestNetworkPlugins/group/flannel/Localhost 0.17
308 TestNetworkPlugins/group/flannel/HairPin 0.15
309 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
310 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.31
311 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
313 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
314 TestNetworkPlugins/group/calico/Start 84.57
315 TestNetworkPlugins/group/kindnet/Start 86.2
316 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
317 TestNetworkPlugins/group/bridge/NetCatPod 9.23
318 TestNetworkPlugins/group/bridge/DNS 0.19
319 TestNetworkPlugins/group/bridge/Localhost 0.11
320 TestNetworkPlugins/group/bridge/HairPin 0.14
321 TestNetworkPlugins/group/custom-flannel/Start 95.66
322 TestNetworkPlugins/group/calico/ControllerPod 6.01
323 TestNetworkPlugins/group/calico/KubeletFlags 0.24
324 TestNetworkPlugins/group/calico/NetCatPod 10.31
326 TestStartStop/group/old-k8s-version/serial/FirstStart 150.22
327 TestNetworkPlugins/group/calico/DNS 0.22
328 TestNetworkPlugins/group/calico/Localhost 0.17
329 TestNetworkPlugins/group/calico/HairPin 0.16
330 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
331 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
332 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
333 TestNetworkPlugins/group/kindnet/DNS 0.2
335 TestStartStop/group/no-preload/serial/FirstStart 89.57
336 TestNetworkPlugins/group/kindnet/Localhost 0.19
337 TestNetworkPlugins/group/kindnet/HairPin 0.17
339 TestStartStop/group/embed-certs/serial/FirstStart 86.3
340 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
341 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.25
342 TestNetworkPlugins/group/custom-flannel/DNS 0.16
343 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
344 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.83
347 TestStartStop/group/no-preload/serial/DeployApp 8.35
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.35
349 TestStartStop/group/no-preload/serial/Stop 91.11
350 TestStartStop/group/embed-certs/serial/DeployApp 8.29
351 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
352 TestStartStop/group/embed-certs/serial/Stop 91.06
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
354 TestStartStop/group/old-k8s-version/serial/DeployApp 9.45
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
356 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.87
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.98
358 TestStartStop/group/old-k8s-version/serial/Stop 91.16
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
360 TestStartStop/group/no-preload/serial/SecondStart 46.44
361 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
362 TestStartStop/group/embed-certs/serial/SecondStart 51.03
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.73
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
366 TestStartStop/group/old-k8s-version/serial/SecondStart 148.39
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
370 TestStartStop/group/no-preload/serial/Pause 3.52
372 TestStartStop/group/newest-cni/serial/FirstStart 70.3
373 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
375 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
376 TestStartStop/group/embed-certs/serial/Pause 2.79
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
378 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
379 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
380 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.11
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
383 TestStartStop/group/newest-cni/serial/Stop 7.37
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
385 TestStartStop/group/newest-cni/serial/SecondStart 38.44
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
389 TestStartStop/group/newest-cni/serial/Pause 2.8
390 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
392 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
393 TestStartStop/group/old-k8s-version/serial/Pause 2.76
TestDownloadOnly/v1.20.0/json-events (10.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-083943 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-083943 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (10.048985977s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.05s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0630 14:06:00.407525 1459494 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0630 14:06:00.407637 1459494 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
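The preload check logged above (preload.go:131/146) boils down to composing the expected tarball name from the Kubernetes version and container runtime and looking for it in the cache directory. A rough sketch of the naming scheme, inferred only from the cache paths visible in this log (helper name is illustrative):

```python
def preload_tarball_name(k8s_version, runtime, arch="amd64", preload_version="v18"):
    # Mirrors the cached filename seen in the log, e.g.:
    # preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
    return (f"preloaded-images-k8s-{preload_version}-{k8s_version}"
            f"-{runtime}-overlay2-{arch}.tar.lz4")

print(preload_tarball_name("v1.20.0", "containerd"))
```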

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-083943
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-083943: exit status 85 (65.517469ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:05 UTC |          |
	|         | -p download-only-083943        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:05:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:05:50.401925 1459506 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:05:50.402053 1459506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:05:50.402061 1459506 out.go:358] Setting ErrFile to fd 2...
	I0630 14:05:50.402066 1459506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:05:50.402238 1459506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	W0630 14:05:50.402375 1459506 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20991-1452140/.minikube/config/config.json: open /home/jenkins/minikube-integration/20991-1452140/.minikube/config/config.json: no such file or directory
	I0630 14:05:50.402945 1459506 out.go:352] Setting JSON to true
	I0630 14:05:50.403902 1459506 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49673,"bootTime":1751242677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:05:50.403974 1459506 start.go:140] virtualization: kvm guest
	I0630 14:05:50.406200 1459506 out.go:97] [download-only-083943] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0630 14:05:50.406435 1459506 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball: no such file or directory
	I0630 14:05:50.406472 1459506 notify.go:220] Checking for updates...
	I0630 14:05:50.407835 1459506 out.go:169] MINIKUBE_LOCATION=20991
	I0630 14:05:50.409044 1459506 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:05:50.410270 1459506 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:05:50.411372 1459506 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:05:50.412656 1459506 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0630 14:05:50.414803 1459506 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0630 14:05:50.415034 1459506 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:05:50.448654 1459506 out.go:97] Using the kvm2 driver based on user configuration
	I0630 14:05:50.448689 1459506 start.go:304] selected driver: kvm2
	I0630 14:05:50.448699 1459506 start.go:908] validating driver "kvm2" against <nil>
	I0630 14:05:50.449007 1459506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:05:50.449114 1459506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20991-1452140/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0630 14:05:50.452788 1459506 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0630 14:05:50.454210 1459506 out.go:97] Downloading driver docker-machine-driver-kvm2:
	I0630 14:05:50.454311 1459506 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:05:50.926817 1459506 start_flags.go:325] no existing cluster config was found, will generate one from the flags 
	I0630 14:05:50.927472 1459506 start_flags.go:408] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0630 14:05:50.927619 1459506 start_flags.go:972] Wait components to verify : map[apiserver:true system_pods:true]
	I0630 14:05:50.927650 1459506 cni.go:84] Creating CNI manager for ""
	I0630 14:05:50.927700 1459506 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0630 14:05:50.927709 1459506 start_flags.go:334] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0630 14:05:50.927771 1459506 start.go:347] cluster config:
	{Name:download-only-083943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-083943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:05:50.927944 1459506 iso.go:125] acquiring lock: {Name:mk3f178100d94eda06013511859d36adab64257f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0630 14:05:50.929811 1459506 out.go:97] Downloading VM boot image ...
	I0630 14:05:50.929869 1459506 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/iso/amd64/minikube-v1.36.0-1751221996-20991-amd64.iso
	
	
	* The control-plane node download-only-083943 host does not exist
	  To start a cluster, run: "minikube start -p download-only-083943"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
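The driver and ISO downloads in the log above pair each artifact URL with a sibling `.sha256` file via a `?checksum=file:` query, in the style of go-getter checksum verification. Constructing that URL is plain string composition (helper name is illustrative):

```python
def checksummed_url(url):
    # Matches the download.go URLs in the log:
    # <url>?checksum=file:<url>.sha256
    return f"{url}?checksum=file:{url}.sha256"

base = ("https://github.com/kubernetes/minikube/releases/download/"
        "v1.36.0/docker-machine-driver-kvm2-amd64")
print(checksummed_url(base))
```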

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-083943
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.33.2/json-events (4.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-480082 --force --alsologtostderr --kubernetes-version=v1.33.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-480082 --force --alsologtostderr --kubernetes-version=v1.33.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (4.056647421s)
--- PASS: TestDownloadOnly/v1.33.2/json-events (4.06s)

TestDownloadOnly/v1.33.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.33.2/preload-exists
I0630 14:06:04.827545 1459494 preload.go:131] Checking if preload exists for k8s version v1.33.2 and runtime containerd
I0630 14:06:04.827616 1459494 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20991-1452140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.33.2/preload-exists (0.00s)

TestDownloadOnly/v1.33.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.33.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-480082
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-480082: exit status 85 (70.028237ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:05 UTC |                     |
	|         | -p download-only-083943        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| delete  | -p download-only-083943        | download-only-083943 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC | 30 Jun 25 14:06 UTC |
	| start   | -o=json --download-only        | download-only-480082 | jenkins | v1.36.0 | 30 Jun 25 14:06 UTC |                     |
	|         | -p download-only-480082        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/30 14:06:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0630 14:06:00.816754 1459699 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:06:00.817008 1459699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:00.817017 1459699 out.go:358] Setting ErrFile to fd 2...
	I0630 14:06:00.817023 1459699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:06:00.817246 1459699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:06:00.817872 1459699 out.go:352] Setting JSON to true
	I0630 14:06:00.818793 1459699 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49684,"bootTime":1751242677,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:06:00.818855 1459699 start.go:140] virtualization: kvm guest
	I0630 14:06:00.820650 1459699 out.go:97] [download-only-480082] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:06:00.820876 1459699 notify.go:220] Checking for updates...
	I0630 14:06:00.822190 1459699 out.go:169] MINIKUBE_LOCATION=20991
	I0630 14:06:00.823471 1459699 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:06:00.824953 1459699 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:06:00.826202 1459699 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:06:00.827402 1459699 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-480082 host does not exist
	  To start a cluster, run: "minikube start -p download-only-480082"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.33.2/LogsDuration (0.07s)

TestDownloadOnly/v1.33.2/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.33.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.33.2/DeleteAll (0.16s)

TestDownloadOnly/v1.33.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.33.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-480082
--- PASS: TestDownloadOnly/v1.33.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I0630 14:06:05.472954 1459494 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.33.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-278166 --alsologtostderr --binary-mirror http://127.0.0.1:42597 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-278166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-278166
--- PASS: TestBinaryMirror (0.66s)

TestOffline (62.59s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-345672 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-345672 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd: (1m1.686031755s)
helpers_test.go:175: Cleaning up "offline-containerd-345672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-345672
--- PASS: TestOffline (62.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-412730
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-412730: exit status 85 (62.850387ms)
-- stdout --
	* Profile "addons-412730" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-412730"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-412730
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-412730: exit status 85 (62.053733ms)
-- stdout --
	* Profile "addons-412730" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-412730"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (138.02s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-412730 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-412730 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m18.02423279s)
--- PASS: TestAddons/Setup (138.02s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-412730 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-412730 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.52s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-412730 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-412730 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cf724ec7-6613-40ad-995e-b2d214ca8c8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cf724ec7-6613-40ad-995e-b2d214ca8c8d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003721081s
addons_test.go:694: (dbg) Run:  kubectl --context addons-412730 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-412730 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-412730 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.52s)

TestAddons/parallel/Registry (14.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.255355ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-694bd45846-xjdfn" [2538157e-75f2-429a-9ee9-dcbb6f56a814] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005114287s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dzp7x" [52f4bc70-5ad7-47f4-bd99-fc5cd471afab] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003641086s
addons_test.go:392: (dbg) Run:  kubectl --context addons-412730 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-412730 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-412730 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.085114708s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 ip
2025/06/30 14:15:10 [DEBUG] GET http://192.168.39.114:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.91s)

TestAddons/parallel/RegistryCreds (0.7s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.001389ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-412730
addons_test.go:332: (dbg) Run:  kubectl --context addons-412730 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xjkv5" [db71aa18-e2df-45dc-b69f-a6c5ad147ed0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005033112s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 addons disable inspektor-gadget --alsologtostderr -v=1: (5.793648187s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/MetricsServer (7.03s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.440591ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0630 14:14:56.689145 1459494 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0630 14:14:56.689199 1459494 kapi.go:107] duration metric: took 6.224652ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-7fbb699795-kjqlg" [517ec2e4-c4bc-45b6-ada2-68d1e16b2f19] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.066691704s
addons_test.go:463: (dbg) Run:  kubectl --context addons-412730 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.03s)

TestAddons/parallel/Headlamp (16.78s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-412730 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-9lcf6" [28c20db6-3e14-4ba8-9988-6a9e84dde175] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-9lcf6" [28c20db6-3e14-4ba8-9988-6a9e84dde175] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-9lcf6" [28c20db6-3e14-4ba8-9988-6a9e84dde175] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-9lcf6" [28c20db6-3e14-4ba8-9988-6a9e84dde175] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.005915559s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 addons disable headlamp --alsologtostderr -v=1: (5.809854506s)
--- PASS: TestAddons/parallel/Headlamp (16.78s)

TestAddons/parallel/CloudSpanner (5.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6d967984f9-gqgvc" [0920ab8a-8a65-4046-bebe-4d3e25cc6f9a] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005607452s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-x5r2c" [b30b72eb-28c1-4e3a-972e-9db47c66ac6f] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003775462s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/parallel/Yakd (11.76s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-7594f" [0f10801f-f7d7-41fb-aff6-2b5831df20f5] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004646667s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-412730 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-412730 addons disable yakd --alsologtostderr -v=1: (5.758118202s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

TestAddons/StoppedEnableDisable (91.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-412730
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-412730: (1m31.013085013s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-412730
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-412730
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-412730
--- PASS: TestAddons/StoppedEnableDisable (91.34s)

TestCertOptions (74.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-522160 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-522160 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m12.549216409s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-522160 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-522160 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-522160 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-522160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-522160
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-522160: (1.085446188s)
--- PASS: TestCertOptions (74.11s)

TestCertExpiration (324.31s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-733841 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-733841 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m25.050575649s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-733841 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-733841 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (57.592333167s)
helpers_test.go:175: Cleaning up "cert-expiration-733841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-733841
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-733841: (1.669746793s)
--- PASS: TestCertExpiration (324.31s)

TestForceSystemdFlag (100.75s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-205861 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-205861 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m39.674338068s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-205861 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-205861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-205861
--- PASS: TestForceSystemdFlag (100.75s)

TestForceSystemdEnv (73.01s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-519372 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-519372 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m11.936037997s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-519372 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-519372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-519372
--- PASS: TestForceSystemdEnv (73.01s)

TestKVMDriverInstallOrUpdate (1.51s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0630 15:21:27.284180 1459494 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0630 15:21:27.284483 1459494 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0630 15:21:27.317649 1459494 install.go:62] docker-machine-driver-kvm2: exit status 1
W0630 15:21:27.317880 1459494 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0630 15:21:27.317973 1459494 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2620833769/001/docker-machine-driver-kvm2
I0630 15:21:27.546633 1459494 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2620833769/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x57df720 0x57df720 0x57df720 0x57df720 0x57df720 0x57df720 0x57df720] Decompressors:map[bz2:0xc0004abc70 gz:0xc0004abc78 tar:0xc0004abc10 tar.bz2:0xc0004abc20 tar.gz:0xc0004abc40 tar.xz:0xc0004abc50 tar.zst:0xc0004abc60 tbz2:0xc0004abc20 tgz:0xc0004abc40 txz:0xc0004abc50 tzst:0xc0004abc60 xz:0xc0004abc80 zip:0xc0004abc90 zst:0xc0004abc88] Getters:map[file:0xc0017b69c0 http:0xc000980a50 https:0xc000980aa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0630 15:21:27.546691 1459494 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2620833769/001/docker-machine-driver-kvm2
I0630 15:21:28.271452 1459494 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0630 15:21:28.271587 1459494 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0630 15:21:28.305012 1459494 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0630 15:21:28.305056 1459494 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0630 15:21:28.305145 1459494 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0630 15:21:28.305217 1459494 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2620833769/002/docker-machine-driver-kvm2
I0630 15:21:28.331133 1459494 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2620833769/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x57df720 0x57df720 0x57df720 0x57df720 0x57df720 0x57df720 0x57df720] Decompressors:map[bz2:0xc0004abc70 gz:0xc0004abc78 tar:0xc0004abc10 tar.bz2:0xc0004abc20 tar.gz:0xc0004abc40 tar.xz:0xc0004abc50 tar.zst:0xc0004abc60 tbz2:0xc0004abc20 tgz:0xc0004abc40 txz:0xc0004abc50 tzst:0xc0004abc60 xz:0xc0004abc80 zip:0xc0004abc90 zst:0xc0004abc88] Getters:map[file:0xc0017b6240 http:0xc000980500 https:0xc000980550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0630 15:21:28.331190 1459494 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2620833769/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.51s)

TestErrorSpam/setup (46.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-644252 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-644252 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-644252 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-644252 --driver=kvm2  --container-runtime=containerd: (46.164684161s)
--- PASS: TestErrorSpam/setup (46.16s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 status
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (1.93s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

TestErrorSpam/stop (5.72s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 stop: (2.348506419s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 stop: (1.428534445s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-644252 --log_dir /tmp/nospam-644252 stop: (1.941903912s)
--- PASS: TestErrorSpam/stop (5.72s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20991-1452140/.minikube/files/etc/test/nested/copy/1459494/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (64.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-125151 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-125151 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m4.16610857s)
--- PASS: TestFunctional/serial/StartWithProxy (64.17s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (69.51s)

=== RUN   TestFunctional/serial/SoftStart
I0630 14:26:57.862827 1459494 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-125151 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-125151 --alsologtostderr -v=8: (1m9.505161337s)
functional_test.go:680: soft start took 1m9.505994877s for "functional-125151" cluster.
I0630 14:28:07.368420 1459494 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
--- PASS: TestFunctional/serial/SoftStart (69.51s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.13s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-125151 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-125151 cache add registry.k8s.io/pause:3.1: (1.023802497s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-125151 cache add registry.k8s.io/pause:3.3: (1.064031448s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-125151 cache add registry.k8s.io/pause:latest: (1.006131646s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-125151 /tmp/TestFunctionalserialCacheCmdcacheadd_local2583644603/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 cache add minikube-local-cache-test:functional-125151
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 cache delete minikube-local-cache-test:functional-125151
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-125151
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (229.077213ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 kubectl -- --context functional-125151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-125151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (43.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-125151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0630 14:28:24.225428 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:24.231937 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:24.243450 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:24.264952 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:24.306515 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:24.388071 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:24.549693 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:24.871541 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:25.513674 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:26.795411 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:29.356854 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:34.478443 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:28:44.720837 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-125151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.80885138s)
functional_test.go:778: restart took 43.809013071s for "functional-125151" cluster.
I0630 14:28:57.891792 1459494 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
--- PASS: TestFunctional/serial/ExtraConfig (43.81s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-125151 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-125151 logs: (1.459508921s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

TestFunctional/serial/LogsFileCmd (1.5s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 logs --file /tmp/TestFunctionalserialLogsFileCmd3151820040/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-125151 logs --file /tmp/TestFunctionalserialLogsFileCmd3151820040/001/logs.txt: (1.499391084s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

TestFunctional/serial/InvalidService (4.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-125151 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-125151
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-125151: exit status 115 (299.842222ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.24:32170 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-125151 delete -f testdata/invalidsvc.yaml
E0630 14:29:05.202559 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/InvalidService (4.32s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 config get cpus: exit status 14 (73.319229ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 config get cpus: exit status 14 (58.752971ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-125151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-125151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (151.097293ms)

-- stdout --
	* [functional-125151] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0630 14:29:14.504584 1472852 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:29:14.504884 1472852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:29:14.504897 1472852 out.go:358] Setting ErrFile to fd 2...
	I0630 14:29:14.504901 1472852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:29:14.505127 1472852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:29:14.505763 1472852 out.go:352] Setting JSON to false
	I0630 14:29:14.506816 1472852 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":51077,"bootTime":1751242677,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:29:14.506886 1472852 start.go:140] virtualization: kvm guest
	I0630 14:29:14.508986 1472852 out.go:177] * [functional-125151] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 14:29:14.510433 1472852 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:29:14.510478 1472852 notify.go:220] Checking for updates...
	I0630 14:29:14.513381 1472852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:29:14.514636 1472852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:29:14.515727 1472852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:29:14.516817 1472852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:29:14.518018 1472852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:29:14.519639 1472852 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:29:14.520095 1472852 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:29:14.520186 1472852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:29:14.537662 1472852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36471
	I0630 14:29:14.538296 1472852 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:29:14.538906 1472852 main.go:141] libmachine: Using API Version  1
	I0630 14:29:14.538928 1472852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:29:14.539360 1472852 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:29:14.539627 1472852 main.go:141] libmachine: (functional-125151) Calling .DriverName
	I0630 14:29:14.539938 1472852 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:29:14.540343 1472852 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:29:14.540396 1472852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:29:14.557091 1472852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36463
	I0630 14:29:14.557672 1472852 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:29:14.558169 1472852 main.go:141] libmachine: Using API Version  1
	I0630 14:29:14.558188 1472852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:29:14.558596 1472852 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:29:14.558885 1472852 main.go:141] libmachine: (functional-125151) Calling .DriverName
	I0630 14:29:14.595145 1472852 out.go:177] * Using the kvm2 driver based on existing profile
	I0630 14:29:14.596407 1472852 start.go:304] selected driver: kvm2
	I0630 14:29:14.596427 1472852 start.go:908] validating driver "kvm2" against &{Name:functional-125151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 Cluster
Name:functional-125151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:29:14.596612 1472852 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:29:14.599289 1472852 out.go:201] 
	W0630 14:29:14.600561 1472852 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0630 14:29:14.601870 1472852 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-125151 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.31s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-125151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-125151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (163.194848ms)

-- stdout --
	* [functional-125151] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0630 14:29:14.815262 1472909 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:29:14.815752 1472909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:29:14.815777 1472909 out.go:358] Setting ErrFile to fd 2...
	I0630 14:29:14.815787 1472909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:29:14.816402 1472909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:29:14.817740 1472909 out.go:352] Setting JSON to false
	I0630 14:29:14.818807 1472909 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":51078,"bootTime":1751242677,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 14:29:14.818935 1472909 start.go:140] virtualization: kvm guest
	I0630 14:29:14.820495 1472909 out.go:177] * [functional-125151] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0630 14:29:14.822313 1472909 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 14:29:14.822368 1472909 notify.go:220] Checking for updates...
	I0630 14:29:14.824807 1472909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 14:29:14.826240 1472909 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 14:29:14.827465 1472909 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 14:29:14.828804 1472909 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 14:29:14.830177 1472909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 14:29:14.831840 1472909 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:29:14.832261 1472909 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:29:14.832329 1472909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:29:14.850988 1472909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I0630 14:29:14.851537 1472909 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:29:14.852287 1472909 main.go:141] libmachine: Using API Version  1
	I0630 14:29:14.852315 1472909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:29:14.852779 1472909 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:29:14.853027 1472909 main.go:141] libmachine: (functional-125151) Calling .DriverName
	I0630 14:29:14.853327 1472909 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 14:29:14.853646 1472909 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:29:14.853689 1472909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:29:14.870691 1472909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0630 14:29:14.871295 1472909 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:29:14.871932 1472909 main.go:141] libmachine: Using API Version  1
	I0630 14:29:14.871955 1472909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:29:14.872326 1472909 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:29:14.872558 1472909 main.go:141] libmachine: (functional-125151) Calling .DriverName
	I0630 14:29:14.910919 1472909 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0630 14:29:14.913336 1472909 start.go:304] selected driver: kvm2
	I0630 14:29:14.913370 1472909 start.go:908] validating driver "kvm2" against &{Name:functional-125151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20991/minikube-v1.36.0-1751221996-20991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.2 Cluster
Name:functional-125151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8441 KubernetesVersion:v1.33.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0630 14:29:14.913522 1472909 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 14:29:14.916006 1472909 out.go:201] 
	W0630 14:29:14.917527 1472909 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0630 14:29:14.918986 1472909 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

TestFunctional/parallel/ServiceCmdConnect (10.83s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-125151 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-125151 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-j5qrg" [ba2d4ce4-376f-49cf-88bb-09319008433a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-j5qrg" [ba2d4ce4-376f-49cf-88bb-09319008433a] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.049544618s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.24:30312
functional_test.go:1692: http://192.168.39.24:30312: success! body:

Hostname: hello-node-connect-58f9cf68d8-j5qrg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.24:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.24:30312
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.83s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (29.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7bf8c714-bb9a-4cd1-b63f-483d7e7e8361] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003838626s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-125151 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-125151 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-125151 get pvc myclaim -o=json
I0630 14:29:11.471576 1459494 retry.go:31] will retry after 2.194494454s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:7085b39e-d1d6-4790-bc30-e26ba5513997 ResourceVersion:794 Generation:0 CreationTimestamp:2025-06-30 14:29:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-7085b39e-d1d6-4790-bc30-e26ba5513997 StorageClassName:0xc001aa2190 VolumeMode:0xc001aa21a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-125151 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-125151 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7d2f36ab-990a-48f6-bab7-4e8c13fe4cd0] Pending
helpers_test.go:344: "sp-pod" [7d2f36ab-990a-48f6-bab7-4e8c13fe4cd0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7d2f36ab-990a-48f6-bab7-4e8c13fe4cd0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003402332s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-125151 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-125151 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-125151 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1ab54f77-ca1c-4674-a230-8c9e4362c72c] Pending
helpers_test.go:344: "sp-pod" [1ab54f77-ca1c-4674-a230-8c9e4362c72c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1ab54f77-ca1c-4674-a230-8c9e4362c72c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00338315s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-125151 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.81s)

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (1.74s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh -n functional-125151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 cp functional-125151:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd502716733/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh -n functional-125151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh -n functional-125151 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.74s)

TestFunctional/parallel/MySQL (363.88s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-125151 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-gvqrl" [f2721b67-e779-4b7c-8810-9f1e66861527] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-gvqrl" [f2721b67-e779-4b7c-8810-9f1e66861527] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 6m0.005053713s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-125151 exec mysql-58ccfd96bb-gvqrl -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-125151 exec mysql-58ccfd96bb-gvqrl -- mysql -ppassword -e "show databases;": exit status 1 (159.62197ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0630 14:35:19.988760 1459494 retry.go:31] will retry after 1.495942409s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-125151 exec mysql-58ccfd96bb-gvqrl -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-125151 exec mysql-58ccfd96bb-gvqrl -- mysql -ppassword -e "show databases;": exit status 1 (308.481233ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0630 14:35:21.793992 1459494 retry.go:31] will retry after 1.560699945s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-125151 exec mysql-58ccfd96bb-gvqrl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (363.88s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1459494/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo cat /etc/test/nested/copy/1459494/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.54s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1459494.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo cat /etc/ssl/certs/1459494.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1459494.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo cat /usr/share/ca-certificates/1459494.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/14594942.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo cat /etc/ssl/certs/14594942.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/14594942.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo cat /usr/share/ca-certificates/14594942.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.54s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-125151 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 ssh "sudo systemctl is-active docker": exit status 1 (236.052153ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 ssh "sudo systemctl is-active crio": exit status 1 (226.075833ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

TestFunctional/parallel/License (0.14s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-125151 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-125151 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-ndkbs" [9a377cf1-ea23-4d43-961d-d208c73c95ad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-ndkbs" [9a377cf1-ea23-4d43-961d-d208c73c95ad] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.026069026s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.28s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-125151 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.33.2
registry.k8s.io/kube-proxy:v1.33.2
registry.k8s.io/kube-controller-manager:v1.33.2
registry.k8s.io/kube-apiserver:v1.33.2
registry.k8s.io/etcd:3.5.21-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.12.0
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-125151
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-125151
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-125151 image ls --format short --alsologtostderr:
I0630 14:29:31.847395 1474299 out.go:345] Setting OutFile to fd 1 ...
I0630 14:29:31.847788 1474299 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:31.847827 1474299 out.go:358] Setting ErrFile to fd 2...
I0630 14:29:31.847841 1474299 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:31.848409 1474299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
I0630 14:29:31.849081 1474299 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:31.849213 1474299 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:31.849610 1474299 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:31.849651 1474299 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:31.865677 1474299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46387
I0630 14:29:31.866219 1474299 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:31.866740 1474299 main.go:141] libmachine: Using API Version  1
I0630 14:29:31.866765 1474299 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:31.867121 1474299 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:31.867342 1474299 main.go:141] libmachine: (functional-125151) Calling .GetState
I0630 14:29:31.869355 1474299 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:31.869401 1474299 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:31.886546 1474299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
I0630 14:29:31.887044 1474299 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:31.887595 1474299 main.go:141] libmachine: Using API Version  1
I0630 14:29:31.887624 1474299 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:31.887958 1474299 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:31.888220 1474299 main.go:141] libmachine: (functional-125151) Calling .DriverName
I0630 14:29:31.888469 1474299 ssh_runner.go:195] Run: systemctl --version
I0630 14:29:31.888499 1474299 main.go:141] libmachine: (functional-125151) Calling .GetSSHHostname
I0630 14:29:31.891584 1474299 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:31.892104 1474299 main.go:141] libmachine: (functional-125151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:c3:6b", ip: ""} in network mk-functional-125151: {Iface:virbr1 ExpiryTime:2025-06-30 15:26:09 +0000 UTC Type:0 Mac:52:54:00:78:c3:6b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:functional-125151 Clientid:01:52:54:00:78:c3:6b}
I0630 14:29:31.892132 1474299 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:31.892364 1474299 main.go:141] libmachine: (functional-125151) Calling .GetSSHPort
I0630 14:29:31.892535 1474299 main.go:141] libmachine: (functional-125151) Calling .GetSSHKeyPath
I0630 14:29:31.892682 1474299 main.go:141] libmachine: (functional-125151) Calling .GetSSHUsername
I0630 14:29:31.892859 1474299 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/functional-125151/id_rsa Username:docker}
I0630 14:29:31.977563 1474299 ssh_runner.go:195] Run: sudo crictl images --output json
I0630 14:29:32.022236 1474299 main.go:141] libmachine: Making call to close driver server
I0630 14:29:32.022253 1474299 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:32.022663 1474299 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:32.022686 1474299 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:29:32.022691 1474299 main.go:141] libmachine: (functional-125151) DBG | Closing plugin on server side
I0630 14:29:32.022703 1474299 main.go:141] libmachine: Making call to close driver server
I0630 14:29:32.022713 1474299 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:32.023004 1474299 main.go:141] libmachine: (functional-125151) DBG | Closing plugin on server side
I0630 14:29:32.023026 1474299 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:32.023045 1474299 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-125151 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.33.2            | sha256:661d40 | 31.9MB |
| docker.io/kicbase/echo-server               | functional-125151  | sha256:9056ab | 2.37MB |
| docker.io/kindest/kindnetd                  | v20250512-df8de77b | sha256:409467 | 44.4MB |
| docker.io/library/minikube-local-cache-test | functional-125151  | sha256:554320 | 992B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.12.0            | sha256:1cf5f1 | 20.9MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-scheduler              | v1.33.2            | sha256:cfed1f | 21.8MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-apiserver              | v1.33.2            | sha256:ee794e | 30.1MB |
| docker.io/library/nginx                     | latest             | sha256:9a9a9f | 72.2MB |
| localhost/my-image                          | functional-125151  | sha256:36cc98 | 775kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/kube-controller-manager     | v1.33.2            | sha256:ff4f56 | 27.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/etcd                        | 3.5.21-0           | sha256:499038 | 58.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-125151 image ls --format table --alsologtostderr:
I0630 14:29:35.975998 1474461 out.go:345] Setting OutFile to fd 1 ...
I0630 14:29:35.976328 1474461 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:35.976349 1474461 out.go:358] Setting ErrFile to fd 2...
I0630 14:29:35.976356 1474461 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:35.976706 1474461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
I0630 14:29:35.977810 1474461 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:35.977999 1474461 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:35.978598 1474461 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:35.978685 1474461 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:35.994994 1474461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
I0630 14:29:35.995635 1474461 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:35.996309 1474461 main.go:141] libmachine: Using API Version  1
I0630 14:29:35.996344 1474461 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:35.996808 1474461 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:35.997081 1474461 main.go:141] libmachine: (functional-125151) Calling .GetState
I0630 14:29:35.999797 1474461 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:35.999877 1474461 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:36.017661 1474461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35347
I0630 14:29:36.018233 1474461 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:36.018892 1474461 main.go:141] libmachine: Using API Version  1
I0630 14:29:36.018928 1474461 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:36.019400 1474461 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:36.019634 1474461 main.go:141] libmachine: (functional-125151) Calling .DriverName
I0630 14:29:36.019880 1474461 ssh_runner.go:195] Run: systemctl --version
I0630 14:29:36.019906 1474461 main.go:141] libmachine: (functional-125151) Calling .GetSSHHostname
I0630 14:29:36.023872 1474461 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:36.024572 1474461 main.go:141] libmachine: (functional-125151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:c3:6b", ip: ""} in network mk-functional-125151: {Iface:virbr1 ExpiryTime:2025-06-30 15:26:09 +0000 UTC Type:0 Mac:52:54:00:78:c3:6b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:functional-125151 Clientid:01:52:54:00:78:c3:6b}
I0630 14:29:36.024616 1474461 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:36.024894 1474461 main.go:141] libmachine: (functional-125151) Calling .GetSSHPort
I0630 14:29:36.025120 1474461 main.go:141] libmachine: (functional-125151) Calling .GetSSHKeyPath
I0630 14:29:36.025334 1474461 main.go:141] libmachine: (functional-125151) Calling .GetSSHUsername
I0630 14:29:36.025677 1474461 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/functional-125151/id_rsa Username:docker}
I0630 14:29:36.121003 1474461 ssh_runner.go:195] Run: sudo crictl images --output json
I0630 14:29:36.171368 1474461 main.go:141] libmachine: Making call to close driver server
I0630 14:29:36.171393 1474461 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:36.171755 1474461 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:36.171772 1474461 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:29:36.171781 1474461 main.go:141] libmachine: Making call to close driver server
I0630 14:29:36.171789 1474461 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:36.171789 1474461 main.go:141] libmachine: (functional-125151) DBG | Closing plugin on server side
I0630 14:29:36.172105 1474461 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:36.172124 1474461 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-125151 image ls --format json --alsologtostderr:
[{"id":"sha256:36cc989fdf6cd036da565b5dd5bd7e8bc022d14592408dd5f9886fd1f3e5dc7e","repoDigests":[],"repoTags":["localhost/my-image:functional-125151"],"size":"774887"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1","repoDigests":["registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121"],"repoTags":["registry.k8s.io/etcd:3.5.21-0"],"size":"58938593"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-125151"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b","repoDigests":["registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.0"],"size":"20939036"},{"id":"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19","repoDigests":["registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51"],"repoTags":["registry.k8s.io/kube-proxy:v1.33.2"],"size":"31891765"},{"id":"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.33.2"],"size":"21782634"},{"id":"sha256:55432035f1d2822b25cfbe10ff7e0eed2b00c44333501fda6e7fe60042eb61ad","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-125151"],"size":"992"},{"id":"sha256:9a9a9fd723f1d2ba52b914ece050f298eec04ef490a9065c52805e46779c4c43","repoDigests":["docker.io/library/nginx@sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1"],"repoTags":["docker.io/library/nginx:latest"],"size":"72225606"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137"],"repoTags":["registry.k8s.io/kube-apiserver:v1.33.2"],"size":"30075899"},{"id":"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.33.2"],"size":"27646507"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-125151 image ls --format json --alsologtostderr:
I0630 14:29:35.734417 1474426 out.go:345] Setting OutFile to fd 1 ...
I0630 14:29:35.734546 1474426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:35.734557 1474426 out.go:358] Setting ErrFile to fd 2...
I0630 14:29:35.734562 1474426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:35.734764 1474426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
I0630 14:29:35.735481 1474426 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:35.735594 1474426 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:35.735941 1474426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:35.736010 1474426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:35.752135 1474426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37789
I0630 14:29:35.752748 1474426 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:35.753450 1474426 main.go:141] libmachine: Using API Version  1
I0630 14:29:35.753472 1474426 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:35.753909 1474426 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:35.754214 1474426 main.go:141] libmachine: (functional-125151) Calling .GetState
I0630 14:29:35.756423 1474426 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:35.756468 1474426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:35.772211 1474426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
I0630 14:29:35.772677 1474426 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:35.773219 1474426 main.go:141] libmachine: Using API Version  1
I0630 14:29:35.773248 1474426 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:35.773676 1474426 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:35.773946 1474426 main.go:141] libmachine: (functional-125151) Calling .DriverName
I0630 14:29:35.774229 1474426 ssh_runner.go:195] Run: systemctl --version
I0630 14:29:35.774263 1474426 main.go:141] libmachine: (functional-125151) Calling .GetSSHHostname
I0630 14:29:35.777968 1474426 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:35.778447 1474426 main.go:141] libmachine: (functional-125151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:c3:6b", ip: ""} in network mk-functional-125151: {Iface:virbr1 ExpiryTime:2025-06-30 15:26:09 +0000 UTC Type:0 Mac:52:54:00:78:c3:6b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:functional-125151 Clientid:01:52:54:00:78:c3:6b}
I0630 14:29:35.778490 1474426 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:35.778722 1474426 main.go:141] libmachine: (functional-125151) Calling .GetSSHPort
I0630 14:29:35.778977 1474426 main.go:141] libmachine: (functional-125151) Calling .GetSSHKeyPath
I0630 14:29:35.779158 1474426 main.go:141] libmachine: (functional-125151) Calling .GetSSHUsername
I0630 14:29:35.779327 1474426 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/functional-125151/id_rsa Username:docker}
I0630 14:29:35.862284 1474426 ssh_runner.go:195] Run: sudo crictl images --output json
I0630 14:29:35.910253 1474426 main.go:141] libmachine: Making call to close driver server
I0630 14:29:35.910270 1474426 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:35.910605 1474426 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:35.910637 1474426 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:29:35.910647 1474426 main.go:141] libmachine: Making call to close driver server
I0630 14:29:35.910655 1474426 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:35.910994 1474426 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:35.911026 1474426 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:29:35.910986 1474426 main.go:141] libmachine: (functional-125151) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-125151 image ls --format yaml --alsologtostderr:
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-125151
size: "2372971"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1
repoDigests:
- registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121
repoTags:
- registry.k8s.io/etcd:3.5.21-0
size: "58938593"
- id: sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3
repoTags:
- registry.k8s.io/kube-scheduler:v1.33.2
size: "21782634"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137
repoTags:
- registry.k8s.io/kube-apiserver:v1.33.2
size: "30075899"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.0
size: "20939036"
- id: sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081
repoTags:
- registry.k8s.io/kube-controller-manager:v1.33.2
size: "27646507"
- id: sha256:55432035f1d2822b25cfbe10ff7e0eed2b00c44333501fda6e7fe60042eb61ad
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-125151
size: "992"
- id: sha256:9a9a9fd723f1d2ba52b914ece050f298eec04ef490a9065c52805e46779c4c43
repoDigests:
- docker.io/library/nginx@sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1
repoTags:
- docker.io/library/nginx:latest
size: "72225606"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51
repoTags:
- registry.k8s.io/kube-proxy:v1.33.2
size: "31891765"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-125151 image ls --format yaml --alsologtostderr:
I0630 14:29:32.083941 1474323 out.go:345] Setting OutFile to fd 1 ...
I0630 14:29:32.084216 1474323 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:32.084225 1474323 out.go:358] Setting ErrFile to fd 2...
I0630 14:29:32.084228 1474323 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:32.084486 1474323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
I0630 14:29:32.085285 1474323 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:32.085432 1474323 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:32.085944 1474323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:32.086014 1474323 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:32.102835 1474323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
I0630 14:29:32.104437 1474323 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:32.105017 1474323 main.go:141] libmachine: Using API Version  1
I0630 14:29:32.105045 1474323 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:32.105458 1474323 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:32.105721 1474323 main.go:141] libmachine: (functional-125151) Calling .GetState
I0630 14:29:32.107834 1474323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:32.107884 1474323 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:32.123736 1474323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
I0630 14:29:32.124258 1474323 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:32.124809 1474323 main.go:141] libmachine: Using API Version  1
I0630 14:29:32.124858 1474323 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:32.125226 1474323 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:32.125417 1474323 main.go:141] libmachine: (functional-125151) Calling .DriverName
I0630 14:29:32.125645 1474323 ssh_runner.go:195] Run: systemctl --version
I0630 14:29:32.125682 1474323 main.go:141] libmachine: (functional-125151) Calling .GetSSHHostname
I0630 14:29:32.129718 1474323 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:32.130161 1474323 main.go:141] libmachine: (functional-125151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:c3:6b", ip: ""} in network mk-functional-125151: {Iface:virbr1 ExpiryTime:2025-06-30 15:26:09 +0000 UTC Type:0 Mac:52:54:00:78:c3:6b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:functional-125151 Clientid:01:52:54:00:78:c3:6b}
I0630 14:29:32.130187 1474323 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:32.130329 1474323 main.go:141] libmachine: (functional-125151) Calling .GetSSHPort
I0630 14:29:32.130573 1474323 main.go:141] libmachine: (functional-125151) Calling .GetSSHKeyPath
I0630 14:29:32.130742 1474323 main.go:141] libmachine: (functional-125151) Calling .GetSSHUsername
I0630 14:29:32.130887 1474323 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/functional-125151/id_rsa Username:docker}
I0630 14:29:32.226495 1474323 ssh_runner.go:195] Run: sudo crictl images --output json
I0630 14:29:32.298511 1474323 main.go:141] libmachine: Making call to close driver server
I0630 14:29:32.298527 1474323 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:32.298955 1474323 main.go:141] libmachine: (functional-125151) DBG | Closing plugin on server side
I0630 14:29:32.298949 1474323 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:32.298981 1474323 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:29:32.298991 1474323 main.go:141] libmachine: Making call to close driver server
I0630 14:29:32.298996 1474323 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:32.299229 1474323 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:32.299246 1474323 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 ssh pgrep buildkitd: exit status 1 (202.210346ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image build -t localhost/my-image:functional-125151 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-125151 image build -t localhost/my-image:functional-125151 testdata/build --alsologtostderr: (2.944180231s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-125151 image build -t localhost/my-image:functional-125151 testdata/build --alsologtostderr:
I0630 14:29:32.558784 1474377 out.go:345] Setting OutFile to fd 1 ...
I0630 14:29:32.559028 1474377 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:32.559037 1474377 out.go:358] Setting ErrFile to fd 2...
I0630 14:29:32.559041 1474377 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0630 14:29:32.559234 1474377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
I0630 14:29:32.559871 1474377 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:32.560433 1474377 config.go:182] Loaded profile config "functional-125151": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
I0630 14:29:32.560798 1474377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:32.560842 1474377 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:32.577452 1474377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43895
I0630 14:29:32.578008 1474377 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:32.578582 1474377 main.go:141] libmachine: Using API Version  1
I0630 14:29:32.578608 1474377 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:32.579014 1474377 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:32.579215 1474377 main.go:141] libmachine: (functional-125151) Calling .GetState
I0630 14:29:32.581203 1474377 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
I0630 14:29:32.581256 1474377 main.go:141] libmachine: Launching plugin server for driver kvm2
I0630 14:29:32.597810 1474377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
I0630 14:29:32.598303 1474377 main.go:141] libmachine: () Calling .GetVersion
I0630 14:29:32.598882 1474377 main.go:141] libmachine: Using API Version  1
I0630 14:29:32.598911 1474377 main.go:141] libmachine: () Calling .SetConfigRaw
I0630 14:29:32.599276 1474377 main.go:141] libmachine: () Calling .GetMachineName
I0630 14:29:32.599551 1474377 main.go:141] libmachine: (functional-125151) Calling .DriverName
I0630 14:29:32.599867 1474377 ssh_runner.go:195] Run: systemctl --version
I0630 14:29:32.599902 1474377 main.go:141] libmachine: (functional-125151) Calling .GetSSHHostname
I0630 14:29:32.603220 1474377 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:32.603883 1474377 main.go:141] libmachine: (functional-125151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:c3:6b", ip: ""} in network mk-functional-125151: {Iface:virbr1 ExpiryTime:2025-06-30 15:26:09 +0000 UTC Type:0 Mac:52:54:00:78:c3:6b Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:functional-125151 Clientid:01:52:54:00:78:c3:6b}
I0630 14:29:32.603910 1474377 main.go:141] libmachine: (functional-125151) DBG | domain functional-125151 has defined IP address 192.168.39.24 and MAC address 52:54:00:78:c3:6b in network mk-functional-125151
I0630 14:29:32.604230 1474377 main.go:141] libmachine: (functional-125151) Calling .GetSSHPort
I0630 14:29:32.604494 1474377 main.go:141] libmachine: (functional-125151) Calling .GetSSHKeyPath
I0630 14:29:32.604733 1474377 main.go:141] libmachine: (functional-125151) Calling .GetSSHUsername
I0630 14:29:32.604916 1474377 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/functional-125151/id_rsa Username:docker}
I0630 14:29:32.688414 1474377 build_images.go:161] Building image from path: /tmp/build.2657241141.tar
I0630 14:29:32.688503 1474377 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0630 14:29:32.701793 1474377 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2657241141.tar
I0630 14:29:32.707324 1474377 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2657241141.tar: stat -c "%s %y" /var/lib/minikube/build/build.2657241141.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2657241141.tar': No such file or directory
I0630 14:29:32.707377 1474377 ssh_runner.go:362] scp /tmp/build.2657241141.tar --> /var/lib/minikube/build/build.2657241141.tar (3072 bytes)
I0630 14:29:32.740494 1474377 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2657241141
I0630 14:29:32.753839 1474377 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2657241141 -xf /var/lib/minikube/build/build.2657241141.tar
I0630 14:29:32.766621 1474377 containerd.go:394] Building image: /var/lib/minikube/build/build.2657241141
I0630 14:29:32.766713 1474377 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2657241141 --local dockerfile=/var/lib/minikube/build/build.2657241141 --output type=image,name=localhost/my-image:functional-125151
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:b028da907f46f477ae0162884f1f734960d11802a4d5bc4ebde05e52e25431bc
#8 exporting manifest sha256:b028da907f46f477ae0162884f1f734960d11802a4d5bc4ebde05e52e25431bc 0.0s done
#8 exporting config sha256:36cc989fdf6cd036da565b5dd5bd7e8bc022d14592408dd5f9886fd1f3e5dc7e 0.0s done
#8 naming to localhost/my-image:functional-125151 done
#8 DONE 0.2s
I0630 14:29:35.420302 1474377 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2657241141 --local dockerfile=/var/lib/minikube/build/build.2657241141 --output type=image,name=localhost/my-image:functional-125151: (2.653555839s)
I0630 14:29:35.420401 1474377 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2657241141
I0630 14:29:35.434361 1474377 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2657241141.tar
I0630 14:29:35.447154 1474377 build_images.go:217] Built localhost/my-image:functional-125151 from /tmp/build.2657241141.tar
I0630 14:29:35.447199 1474377 build_images.go:133] succeeded building to: functional-125151
I0630 14:29:35.447208 1474377 build_images.go:134] failed building to: 
I0630 14:29:35.447244 1474377 main.go:141] libmachine: Making call to close driver server
I0630 14:29:35.447259 1474377 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:35.447661 1474377 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:35.447686 1474377 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:29:35.447697 1474377 main.go:141] libmachine: Making call to close driver server
I0630 14:29:35.447706 1474377 main.go:141] libmachine: (functional-125151) Calling .Close
I0630 14:29:35.447976 1474377 main.go:141] libmachine: Successfully made call to close driver server
I0630 14:29:35.447995 1474377 main.go:141] libmachine: Making call to close connection to plugin binary
I0630 14:29:35.448019 1474377 main.go:141] libmachine: (functional-125151) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)

TestFunctional/parallel/ImageCommands/Setup (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-125151
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image load --daemon kicbase/echo-server:functional-125151 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-125151 image load --daemon kicbase/echo-server:functional-125151 --alsologtostderr: (1.264564897s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image load --daemon kicbase/echo-server:functional-125151 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-125151
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image load --daemon kicbase/echo-server:functional-125151 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p functional-125151 image load --daemon kicbase/echo-server:functional-125151 --alsologtostderr: (1.023239409s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image save kicbase/echo-server:functional-125151 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image rm kicbase/echo-server:functional-125151 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-125151
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 image save --daemon kicbase/echo-server:functional-125151 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-125151
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

TestFunctional/parallel/ServiceCmd/List (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.97s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 service list -o json
functional_test.go:1511: Took "967.692558ms" to run "out/minikube-linux-amd64 -p functional-125151 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.97s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.24:31319
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.63s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 update-context --alsologtostderr -v=2
E0630 14:29:46.163956 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:31:08.086239 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:33:24.221075 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:33:51.928589 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.24:31319
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "348.532208ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "71.961708ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "365.812633ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "52.154484ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (7.57s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdany-port412293609/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1751293760194277168" to /tmp/TestFunctionalparallelMountCmdany-port412293609/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1751293760194277168" to /tmp/TestFunctionalparallelMountCmdany-port412293609/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1751293760194277168" to /tmp/TestFunctionalparallelMountCmdany-port412293609/001/test-1751293760194277168
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.057315ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0630 14:29:20.470611 1459494 retry.go:31] will retry after 474.303945ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 30 14:29 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 30 14:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 30 14:29 test-1751293760194277168
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh cat /mount-9p/test-1751293760194277168
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-125151 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e67cb610-c5ce-4fc0-a4a8-51efa9bdf6f4] Pending
helpers_test.go:344: "busybox-mount" [e67cb610-c5ce-4fc0-a4a8-51efa9bdf6f4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e67cb610-c5ce-4fc0-a4a8-51efa9bdf6f4] Running
helpers_test.go:344: "busybox-mount" [e67cb610-c5ce-4fc0-a4a8-51efa9bdf6f4] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e67cb610-c5ce-4fc0-a4a8-51efa9bdf6f4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004112634s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-125151 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdany-port412293609/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.57s)

TestFunctional/parallel/MountCmd/specific-port (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdspecific-port216868413/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (236.543974ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0630 14:29:28.003425 1459494 retry.go:31] will retry after 678.444214ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdspecific-port216868413/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 ssh "sudo umount -f /mount-9p": exit status 1 (208.488862ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-125151 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdspecific-port216868413/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3918773941/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3918773941/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3918773941/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T" /mount1: exit status 1 (220.338331ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0630 14:29:29.944578 1459494 retry.go:31] will retry after 663.605294ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-125151 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-125151 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3918773941/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3918773941/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-125151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3918773941/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-125151
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-125151
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-125151
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (217.33s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
E0630 14:38:24.221649 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (3m36.555039128s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (217.33s)

TestMultiControlPlane/serial/DeployApp (5.3s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 kubectl -- rollout status deployment/busybox: (3.030486911s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
E0630 14:39:05.494250 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
E0630 14:39:05.500570 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:39:05.511969 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:39:05.533514 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:39:05.574956 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-mjpmx -- nslookup kubernetes.io
E0630 14:39:05.656723 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:39:05.818953 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-qfglt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-xsb8k -- nslookup kubernetes.io
E0630 14:39:06.140548 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-mjpmx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-qfglt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-xsb8k -- nslookup kubernetes.default
E0630 14:39:06.782275 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-mjpmx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-qfglt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-xsb8k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.30s)

TestMultiControlPlane/serial/PingHostFromPods (1.24s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-mjpmx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-mjpmx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-qfglt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0630 14:39:08.065241 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-qfglt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-xsb8k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 kubectl -- exec busybox-58667487b6-xsb8k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)

TestMultiControlPlane/serial/AddWorkerNode (49.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 node add --alsologtostderr -v 5
E0630 14:39:10.626698 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:39:15.748907 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:39:25.990255 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:39:46.472296 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 node add --alsologtostderr -v 5: (48.945702353s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.89s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-480422 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

TestMultiControlPlane/serial/CopyFile (14.49s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 status --output json --alsologtostderr -v 5
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp testdata/cp-test.txt ha-480422:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2671880909/001/cp-test_ha-480422.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422:/home/docker/cp-test.txt ha-480422-m02:/home/docker/cp-test_ha-480422_ha-480422-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m02 "sudo cat /home/docker/cp-test_ha-480422_ha-480422-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422:/home/docker/cp-test.txt ha-480422-m03:/home/docker/cp-test_ha-480422_ha-480422-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m03 "sudo cat /home/docker/cp-test_ha-480422_ha-480422-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422:/home/docker/cp-test.txt ha-480422-m04:/home/docker/cp-test_ha-480422_ha-480422-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m04 "sudo cat /home/docker/cp-test_ha-480422_ha-480422-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp testdata/cp-test.txt ha-480422-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2671880909/001/cp-test_ha-480422-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m02:/home/docker/cp-test.txt ha-480422:/home/docker/cp-test_ha-480422-m02_ha-480422.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422 "sudo cat /home/docker/cp-test_ha-480422-m02_ha-480422.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m02:/home/docker/cp-test.txt ha-480422-m03:/home/docker/cp-test_ha-480422-m02_ha-480422-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m03 "sudo cat /home/docker/cp-test_ha-480422-m02_ha-480422-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m02:/home/docker/cp-test.txt ha-480422-m04:/home/docker/cp-test_ha-480422-m02_ha-480422-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m04 "sudo cat /home/docker/cp-test_ha-480422-m02_ha-480422-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp testdata/cp-test.txt ha-480422-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2671880909/001/cp-test_ha-480422-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m03:/home/docker/cp-test.txt ha-480422:/home/docker/cp-test_ha-480422-m03_ha-480422.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422 "sudo cat /home/docker/cp-test_ha-480422-m03_ha-480422.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m03:/home/docker/cp-test.txt ha-480422-m02:/home/docker/cp-test_ha-480422-m03_ha-480422-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m02 "sudo cat /home/docker/cp-test_ha-480422-m03_ha-480422-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m03:/home/docker/cp-test.txt ha-480422-m04:/home/docker/cp-test_ha-480422-m03_ha-480422-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m04 "sudo cat /home/docker/cp-test_ha-480422-m03_ha-480422-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp testdata/cp-test.txt ha-480422-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2671880909/001/cp-test_ha-480422-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m04:/home/docker/cp-test.txt ha-480422:/home/docker/cp-test_ha-480422-m04_ha-480422.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422 "sudo cat /home/docker/cp-test_ha-480422-m04_ha-480422.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m04:/home/docker/cp-test.txt ha-480422-m02:/home/docker/cp-test_ha-480422-m04_ha-480422-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m02 "sudo cat /home/docker/cp-test_ha-480422-m04_ha-480422-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 cp ha-480422-m04:/home/docker/cp-test.txt ha-480422-m03:/home/docker/cp-test_ha-480422-m04_ha-480422-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 ssh -n ha-480422-m03 "sudo cat /home/docker/cp-test_ha-480422-m04_ha-480422-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.49s)
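The CopyFile steps above repeat one push/pull/verify pattern: copy a file to a node, then `ssh` in and `cat` it back to confirm the contents. A minimal local sketch of the same check, using plain `cp` and `diff` as stand-ins for `minikube cp` and `minikube ssh` (paths here are hypothetical temp files, not the test's real ones):

```shell
set -eu
src=$(mktemp)                                     # stand-in for testdata/cp-test.txt
dst=$(mktemp -d)                                  # stand-in for a node's /home/docker
echo "Test file for checking" > "$src"
cp "$src" "$dst/cp-test.txt"                      # stand-in for: minikube -p <profile> cp <src> <node>:<path>
diff "$src" "$dst/cp-test.txt" && echo verified   # stand-in for: minikube ssh -n <node> "sudo cat <path>"
```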

TestMultiControlPlane/serial/StopSecondaryNode (91.74s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 node stop m02 --alsologtostderr -v 5
E0630 14:40:27.434495 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 node stop m02 --alsologtostderr -v 5: (1m31.030704458s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5: exit status 7 (705.211409ms)

-- stdout --
	ha-480422
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-480422-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-480422-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-480422-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0630 14:41:45.201281 1480433 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:41:45.201558 1480433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:45.201568 1480433 out.go:358] Setting ErrFile to fd 2...
	I0630 14:41:45.201575 1480433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:41:45.201797 1480433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:41:45.202002 1480433 out.go:352] Setting JSON to false
	I0630 14:41:45.202047 1480433 mustload.go:65] Loading cluster: ha-480422
	I0630 14:41:45.202152 1480433 notify.go:220] Checking for updates...
	I0630 14:41:45.202482 1480433 config.go:182] Loaded profile config "ha-480422": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:41:45.202510 1480433 status.go:174] checking status of ha-480422 ...
	I0630 14:41:45.203118 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.203194 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.224542 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43551
	I0630 14:41:45.225175 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.225779 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.225804 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.226382 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.226610 1480433 main.go:141] libmachine: (ha-480422) Calling .GetState
	I0630 14:41:45.228461 1480433 status.go:371] ha-480422 host status = "Running" (err=<nil>)
	I0630 14:41:45.228480 1480433 host.go:66] Checking if "ha-480422" exists ...
	I0630 14:41:45.228868 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.228916 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.244606 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0630 14:41:45.245099 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.245627 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.245653 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.246116 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.246367 1480433 main.go:141] libmachine: (ha-480422) Calling .GetIP
	I0630 14:41:45.249320 1480433 main.go:141] libmachine: (ha-480422) DBG | domain ha-480422 has defined MAC address 52:54:00:79:71:0c in network mk-ha-480422
	I0630 14:41:45.249788 1480433 main.go:141] libmachine: (ha-480422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:71:0c", ip: ""} in network mk-ha-480422: {Iface:virbr1 ExpiryTime:2025-06-30 15:35:40 +0000 UTC Type:0 Mac:52:54:00:79:71:0c Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-480422 Clientid:01:52:54:00:79:71:0c}
	I0630 14:41:45.249817 1480433 main.go:141] libmachine: (ha-480422) DBG | domain ha-480422 has defined IP address 192.168.39.192 and MAC address 52:54:00:79:71:0c in network mk-ha-480422
	I0630 14:41:45.249932 1480433 host.go:66] Checking if "ha-480422" exists ...
	I0630 14:41:45.250241 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.250285 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.265664 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0630 14:41:45.266137 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.266731 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.266764 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.267121 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.267355 1480433 main.go:141] libmachine: (ha-480422) Calling .DriverName
	I0630 14:41:45.267538 1480433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 14:41:45.267563 1480433 main.go:141] libmachine: (ha-480422) Calling .GetSSHHostname
	I0630 14:41:45.270813 1480433 main.go:141] libmachine: (ha-480422) DBG | domain ha-480422 has defined MAC address 52:54:00:79:71:0c in network mk-ha-480422
	I0630 14:41:45.271361 1480433 main.go:141] libmachine: (ha-480422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:71:0c", ip: ""} in network mk-ha-480422: {Iface:virbr1 ExpiryTime:2025-06-30 15:35:40 +0000 UTC Type:0 Mac:52:54:00:79:71:0c Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-480422 Clientid:01:52:54:00:79:71:0c}
	I0630 14:41:45.271401 1480433 main.go:141] libmachine: (ha-480422) DBG | domain ha-480422 has defined IP address 192.168.39.192 and MAC address 52:54:00:79:71:0c in network mk-ha-480422
	I0630 14:41:45.271652 1480433 main.go:141] libmachine: (ha-480422) Calling .GetSSHPort
	I0630 14:41:45.271846 1480433 main.go:141] libmachine: (ha-480422) Calling .GetSSHKeyPath
	I0630 14:41:45.272022 1480433 main.go:141] libmachine: (ha-480422) Calling .GetSSHUsername
	I0630 14:41:45.272151 1480433 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/ha-480422/id_rsa Username:docker}
	I0630 14:41:45.367653 1480433 ssh_runner.go:195] Run: systemctl --version
	I0630 14:41:45.376346 1480433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:41:45.396227 1480433 kubeconfig.go:125] found "ha-480422" server: "https://192.168.39.254:8443"
	I0630 14:41:45.396274 1480433 api_server.go:166] Checking apiserver status ...
	I0630 14:41:45.396322 1480433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:41:45.416597 1480433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1442/cgroup
	W0630 14:41:45.432495 1480433 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1442/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0630 14:41:45.432563 1480433 ssh_runner.go:195] Run: ls
	I0630 14:41:45.440429 1480433 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0630 14:41:45.445833 1480433 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0630 14:41:45.445874 1480433 status.go:463] ha-480422 apiserver status = Running (err=<nil>)
	I0630 14:41:45.445891 1480433 status.go:176] ha-480422 status: &{Name:ha-480422 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 14:41:45.445928 1480433 status.go:174] checking status of ha-480422-m02 ...
	I0630 14:41:45.446292 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.446353 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.462205 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44351
	I0630 14:41:45.462816 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.463484 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.463510 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.463912 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.464138 1480433 main.go:141] libmachine: (ha-480422-m02) Calling .GetState
	I0630 14:41:45.465915 1480433 status.go:371] ha-480422-m02 host status = "Stopped" (err=<nil>)
	I0630 14:41:45.465931 1480433 status.go:384] host is not running, skipping remaining checks
	I0630 14:41:45.465938 1480433 status.go:176] ha-480422-m02 status: &{Name:ha-480422-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 14:41:45.465962 1480433 status.go:174] checking status of ha-480422-m03 ...
	I0630 14:41:45.466260 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.466346 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.484899 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0630 14:41:45.485488 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.486057 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.486084 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.486422 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.486616 1480433 main.go:141] libmachine: (ha-480422-m03) Calling .GetState
	I0630 14:41:45.488292 1480433 status.go:371] ha-480422-m03 host status = "Running" (err=<nil>)
	I0630 14:41:45.488311 1480433 host.go:66] Checking if "ha-480422-m03" exists ...
	I0630 14:41:45.488597 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.488645 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.504235 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42345
	I0630 14:41:45.504715 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.505309 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.505333 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.505673 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.505911 1480433 main.go:141] libmachine: (ha-480422-m03) Calling .GetIP
	I0630 14:41:45.509257 1480433 main.go:141] libmachine: (ha-480422-m03) DBG | domain ha-480422-m03 has defined MAC address 52:54:00:f9:54:9d in network mk-ha-480422
	I0630 14:41:45.509780 1480433 main.go:141] libmachine: (ha-480422-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:54:9d", ip: ""} in network mk-ha-480422: {Iface:virbr1 ExpiryTime:2025-06-30 15:37:55 +0000 UTC Type:0 Mac:52:54:00:f9:54:9d Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-480422-m03 Clientid:01:52:54:00:f9:54:9d}
	I0630 14:41:45.509811 1480433 main.go:141] libmachine: (ha-480422-m03) DBG | domain ha-480422-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:54:9d in network mk-ha-480422
	I0630 14:41:45.510035 1480433 host.go:66] Checking if "ha-480422-m03" exists ...
	I0630 14:41:45.510349 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.510398 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.527463 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39645
	I0630 14:41:45.528098 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.528588 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.528610 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.528934 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.529179 1480433 main.go:141] libmachine: (ha-480422-m03) Calling .DriverName
	I0630 14:41:45.529411 1480433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 14:41:45.529446 1480433 main.go:141] libmachine: (ha-480422-m03) Calling .GetSSHHostname
	I0630 14:41:45.532409 1480433 main.go:141] libmachine: (ha-480422-m03) DBG | domain ha-480422-m03 has defined MAC address 52:54:00:f9:54:9d in network mk-ha-480422
	I0630 14:41:45.532902 1480433 main.go:141] libmachine: (ha-480422-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:54:9d", ip: ""} in network mk-ha-480422: {Iface:virbr1 ExpiryTime:2025-06-30 15:37:55 +0000 UTC Type:0 Mac:52:54:00:f9:54:9d Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-480422-m03 Clientid:01:52:54:00:f9:54:9d}
	I0630 14:41:45.532937 1480433 main.go:141] libmachine: (ha-480422-m03) DBG | domain ha-480422-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:54:9d in network mk-ha-480422
	I0630 14:41:45.533050 1480433 main.go:141] libmachine: (ha-480422-m03) Calling .GetSSHPort
	I0630 14:41:45.533249 1480433 main.go:141] libmachine: (ha-480422-m03) Calling .GetSSHKeyPath
	I0630 14:41:45.533411 1480433 main.go:141] libmachine: (ha-480422-m03) Calling .GetSSHUsername
	I0630 14:41:45.533547 1480433 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/ha-480422-m03/id_rsa Username:docker}
	I0630 14:41:45.618673 1480433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:41:45.638575 1480433 kubeconfig.go:125] found "ha-480422" server: "https://192.168.39.254:8443"
	I0630 14:41:45.638607 1480433 api_server.go:166] Checking apiserver status ...
	I0630 14:41:45.638647 1480433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 14:41:45.658929 1480433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1715/cgroup
	W0630 14:41:45.670709 1480433 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1715/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0630 14:41:45.670788 1480433 ssh_runner.go:195] Run: ls
	I0630 14:41:45.676509 1480433 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0630 14:41:45.682092 1480433 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0630 14:41:45.682150 1480433 status.go:463] ha-480422-m03 apiserver status = Running (err=<nil>)
	I0630 14:41:45.682164 1480433 status.go:176] ha-480422-m03 status: &{Name:ha-480422-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 14:41:45.682193 1480433 status.go:174] checking status of ha-480422-m04 ...
	I0630 14:41:45.682672 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.682734 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.699527 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0630 14:41:45.700015 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.700428 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.700450 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.700827 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.701050 1480433 main.go:141] libmachine: (ha-480422-m04) Calling .GetState
	I0630 14:41:45.702787 1480433 status.go:371] ha-480422-m04 host status = "Running" (err=<nil>)
	I0630 14:41:45.702805 1480433 host.go:66] Checking if "ha-480422-m04" exists ...
	I0630 14:41:45.703124 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.703167 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.719582 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43325
	I0630 14:41:45.720117 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.720655 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.720678 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.721057 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.721314 1480433 main.go:141] libmachine: (ha-480422-m04) Calling .GetIP
	I0630 14:41:45.724135 1480433 main.go:141] libmachine: (ha-480422-m04) DBG | domain ha-480422-m04 has defined MAC address 52:54:00:97:d6:5b in network mk-ha-480422
	I0630 14:41:45.724726 1480433 main.go:141] libmachine: (ha-480422-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:d6:5b", ip: ""} in network mk-ha-480422: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:26 +0000 UTC Type:0 Mac:52:54:00:97:d6:5b Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-480422-m04 Clientid:01:52:54:00:97:d6:5b}
	I0630 14:41:45.724765 1480433 main.go:141] libmachine: (ha-480422-m04) DBG | domain ha-480422-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:97:d6:5b in network mk-ha-480422
	I0630 14:41:45.725112 1480433 host.go:66] Checking if "ha-480422-m04" exists ...
	I0630 14:41:45.725461 1480433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:41:45.725513 1480433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:41:45.741429 1480433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I0630 14:41:45.742002 1480433 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:41:45.742575 1480433 main.go:141] libmachine: Using API Version  1
	I0630 14:41:45.742604 1480433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:41:45.743039 1480433 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:41:45.743283 1480433 main.go:141] libmachine: (ha-480422-m04) Calling .DriverName
	I0630 14:41:45.743524 1480433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 14:41:45.743565 1480433 main.go:141] libmachine: (ha-480422-m04) Calling .GetSSHHostname
	I0630 14:41:45.747078 1480433 main.go:141] libmachine: (ha-480422-m04) DBG | domain ha-480422-m04 has defined MAC address 52:54:00:97:d6:5b in network mk-ha-480422
	I0630 14:41:45.747579 1480433 main.go:141] libmachine: (ha-480422-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:d6:5b", ip: ""} in network mk-ha-480422: {Iface:virbr1 ExpiryTime:2025-06-30 15:39:26 +0000 UTC Type:0 Mac:52:54:00:97:d6:5b Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-480422-m04 Clientid:01:52:54:00:97:d6:5b}
	I0630 14:41:45.747614 1480433 main.go:141] libmachine: (ha-480422-m04) DBG | domain ha-480422-m04 has defined IP address 192.168.39.60 and MAC address 52:54:00:97:d6:5b in network mk-ha-480422
	I0630 14:41:45.747801 1480433 main.go:141] libmachine: (ha-480422-m04) Calling .GetSSHPort
	I0630 14:41:45.748001 1480433 main.go:141] libmachine: (ha-480422-m04) Calling .GetSSHKeyPath
	I0630 14:41:45.748179 1480433 main.go:141] libmachine: (ha-480422-m04) Calling .GetSSHUsername
	I0630 14:41:45.748293 1480433 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/ha-480422-m04/id_rsa Username:docker}
	I0630 14:41:45.834800 1480433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 14:41:45.853224 1480433 status.go:176] ha-480422-m04 status: &{Name:ha-480422-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.74s)
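Note the non-zero exit above: `minikube status` signals a degraded cluster through its exit code (exit status 7 in this run) rather than only through stdout, so callers should branch on the code. A runnable sketch of that handling, with a stub `status` function standing in for the real command so the snippet works without a cluster:

```shell
# Hedged stand-in: `status` here is a stub; in the log the real command is
# `out/minikube-linux-amd64 -p ha-480422 status`, which exited 7.
status() { return 7; }    # pretend one control-plane host is stopped
if status; then
  echo "all components running"
else
  rc=$?                   # capture the failing exit code before anything else runs
  echo "degraded cluster (exit $rc)"
fi
```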

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (26.54s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 node start m02 --alsologtostderr -v 5
E0630 14:41:49.356356 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 node start m02 --alsologtostderr -v 5: (25.235818128s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5: (1.206904008s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (26.54s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.025895084s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (405.09s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 stop --alsologtostderr -v 5
E0630 14:43:24.221709 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:44:05.493813 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:44:33.198503 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:44:47.289987 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 stop --alsologtostderr -v 5: (4m34.520324009s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 start --wait true --alsologtostderr -v 5
E0630 14:48:24.221951 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 start --wait true --alsologtostderr -v 5: (2m10.441264904s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (405.09s)
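The test above asserts that the `node list` output is identical before the stop and after the restart. A local stand-in for that comparison, capturing the two listings in temp files (the node names below mirror the log; the files are hypothetical):

```shell
set -eu
before=$(mktemp) && after=$(mktemp)
# stand-ins for: minikube -p ha-480422 node list (before stop / after restart)
printf 'ha-480422\nha-480422-m02\nha-480422-m03\nha-480422-m04\n' > "$before"
printf 'ha-480422\nha-480422-m02\nha-480422-m03\nha-480422-m04\n' > "$after"
diff "$before" "$after" && echo "node list unchanged"
```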

TestMultiControlPlane/serial/DeleteSecondaryNode (7.27s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 node delete m03 --alsologtostderr -v 5
E0630 14:49:05.494673 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 node delete m03 --alsologtostderr -v 5: (6.465059676s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.27s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (272.92s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 stop --alsologtostderr -v 5
E0630 14:53:24.221850 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 stop --alsologtostderr -v 5: (4m32.793673435s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5: exit status 7 (121.187986ms)

-- stdout --
	ha-480422
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-480422-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-480422-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0630 14:53:40.054066 1484117 out.go:345] Setting OutFile to fd 1 ...
	I0630 14:53:40.054369 1484117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:53:40.054379 1484117 out.go:358] Setting ErrFile to fd 2...
	I0630 14:53:40.054384 1484117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 14:53:40.054587 1484117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 14:53:40.054786 1484117 out.go:352] Setting JSON to false
	I0630 14:53:40.054832 1484117 mustload.go:65] Loading cluster: ha-480422
	I0630 14:53:40.054916 1484117 notify.go:220] Checking for updates...
	I0630 14:53:40.055293 1484117 config.go:182] Loaded profile config "ha-480422": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 14:53:40.055316 1484117 status.go:174] checking status of ha-480422 ...
	I0630 14:53:40.055772 1484117 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:53:40.055817 1484117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:53:40.073933 1484117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I0630 14:53:40.074434 1484117 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:53:40.075022 1484117 main.go:141] libmachine: Using API Version  1
	I0630 14:53:40.075053 1484117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:53:40.075467 1484117 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:53:40.075727 1484117 main.go:141] libmachine: (ha-480422) Calling .GetState
	I0630 14:53:40.078484 1484117 status.go:371] ha-480422 host status = "Stopped" (err=<nil>)
	I0630 14:53:40.078509 1484117 status.go:384] host is not running, skipping remaining checks
	I0630 14:53:40.078518 1484117 status.go:176] ha-480422 status: &{Name:ha-480422 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 14:53:40.078557 1484117 status.go:174] checking status of ha-480422-m02 ...
	I0630 14:53:40.078890 1484117 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:53:40.078936 1484117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:53:40.095120 1484117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45231
	I0630 14:53:40.095648 1484117 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:53:40.096117 1484117 main.go:141] libmachine: Using API Version  1
	I0630 14:53:40.096139 1484117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:53:40.096500 1484117 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:53:40.096760 1484117 main.go:141] libmachine: (ha-480422-m02) Calling .GetState
	I0630 14:53:40.098473 1484117 status.go:371] ha-480422-m02 host status = "Stopped" (err=<nil>)
	I0630 14:53:40.098489 1484117 status.go:384] host is not running, skipping remaining checks
	I0630 14:53:40.098495 1484117 status.go:176] ha-480422-m02 status: &{Name:ha-480422-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 14:53:40.098521 1484117 status.go:174] checking status of ha-480422-m04 ...
	I0630 14:53:40.098964 1484117 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 14:53:40.099019 1484117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 14:53:40.116238 1484117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
	I0630 14:53:40.116714 1484117 main.go:141] libmachine: () Calling .GetVersion
	I0630 14:53:40.117322 1484117 main.go:141] libmachine: Using API Version  1
	I0630 14:53:40.117356 1484117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 14:53:40.117743 1484117 main.go:141] libmachine: () Calling .GetMachineName
	I0630 14:53:40.117967 1484117 main.go:141] libmachine: (ha-480422-m04) Calling .GetState
	I0630 14:53:40.119535 1484117 status.go:371] ha-480422-m04 host status = "Stopped" (err=<nil>)
	I0630 14:53:40.119555 1484117 status.go:384] host is not running, skipping remaining checks
	I0630 14:53:40.119563 1484117 status.go:176] ha-480422-m04 status: &{Name:ha-480422-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.92s)

TestMultiControlPlane/serial/RestartCluster (123.09s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
E0630 14:54:05.494197 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 14:55:28.560940 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (2m2.241424386s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (123.09s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (79.27s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-480422 node add --control-plane --alsologtostderr -v 5: (1m18.342487672s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-480422 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.003333399s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.00s)

TestJSONOutput/start/Command (57.94s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-705839 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-705839 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd: (57.935171957s)
--- PASS: TestJSONOutput/start/Command (57.94s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-705839 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-705839 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.36s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-705839 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-705839 --output=json --user=testUser: (7.357221697s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-595338 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-595338 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.254593ms)

-- stdout --
	{"specversion":"1.0","id":"493a76a4-7934-4d81-9262-4f249d25c3f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-595338] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b14c0173-5741-41ab-b942-218aefb219a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20991"}}
	{"specversion":"1.0","id":"d15bbb92-0145-4dfa-bb7b-de4133c4bdba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ca37cac1-519c-425b-94ba-571a3bf4b953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig"}}
	{"specversion":"1.0","id":"10516100-6185-47de-8c98-f6b037c39bef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube"}}
	{"specversion":"1.0","id":"1a31ebbd-7e87-40bb-aed2-853aecd389d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fa832382-cb0c-475f-8bda-b02b88a81d1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"200be6ef-a454-43b1-a68e-afeb820bc424","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-595338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-595338
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (96.57s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-947470 --driver=kvm2  --container-runtime=containerd
E0630 14:58:24.229401 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-947470 --driver=kvm2  --container-runtime=containerd: (47.501104789s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-962740 --driver=kvm2  --container-runtime=containerd
E0630 14:59:05.494613 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-962740 --driver=kvm2  --container-runtime=containerd: (46.155027596s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-947470
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-962740
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-962740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-962740
helpers_test.go:175: Cleaning up "first-947470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-947470
--- PASS: TestMinikubeProfile (96.57s)

TestMountStart/serial/StartWithMountFirst (29.1s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-031038 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-031038 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.094210867s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.10s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-031038 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-031038 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (27.98s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-052524 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-052524 --memory=3072 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.979520449s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.98s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-052524 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-052524 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.75s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-031038 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.75s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-052524 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-052524 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.46s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-052524
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-052524: (1.460108808s)
--- PASS: TestMountStart/serial/Stop (1.46s)

TestMountStart/serial/RestartStopped (23.77s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-052524
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-052524: (22.773494676s)
--- PASS: TestMountStart/serial/RestartStopped (23.77s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-052524 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-052524 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (114.13s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232047 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0630 15:01:27.291680 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-232047 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m53.667915105s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.13s)

TestMultiNode/serial/DeployApp2Nodes (4.3s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-232047 -- rollout status deployment/busybox: (2.767694601s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-jpphh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-xlrqt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-jpphh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-xlrqt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-jpphh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-xlrqt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.30s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-jpphh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-jpphh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-xlrqt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232047 -- exec busybox-58667487b6-xlrqt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
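The host-IP extraction used above depends on the layout of BusyBox nslookup output: line 5 carries the resolved address, and the third space-separated field of that line is the IP. A minimal sketch against a hypothetical sample of that output (sample values are illustrative, not captured from this run):

```shell
# Hypothetical BusyBox nslookup output for host.minikube.internal.
# The test pipes the real output through the same awk/cut stages:
#   nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
nslookup_out='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# awk 'NR==5' keeps only line 5 ("Address 1: <ip> <name>");
# cut -d' ' -f3 then takes the third space-separated field, the IP.
host_ip=$(printf '%s\n' "$nslookup_out" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The extracted IP (192.168.39.1 here, matching the libvirt gateway seen in the DHCP leases below) is what the follow-up `ping -c 1` targets.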
TestMultiNode/serial/AddNode (50.88s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-232047 -v=5 --alsologtostderr
E0630 15:03:24.221719 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:04:05.493774 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-232047 -v=5 --alsologtostderr: (50.238147297s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.88s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-232047 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (7.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp testdata/cp-test.txt multinode-232047:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp multinode-232047:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2281906034/001/cp-test_multinode-232047.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp multinode-232047:/home/docker/cp-test.txt multinode-232047-m02:/home/docker/cp-test_multinode-232047_multinode-232047-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m02 "sudo cat /home/docker/cp-test_multinode-232047_multinode-232047-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp multinode-232047:/home/docker/cp-test.txt multinode-232047-m03:/home/docker/cp-test_multinode-232047_multinode-232047-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m03 "sudo cat /home/docker/cp-test_multinode-232047_multinode-232047-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp testdata/cp-test.txt multinode-232047-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp multinode-232047-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2281906034/001/cp-test_multinode-232047-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp multinode-232047-m02:/home/docker/cp-test.txt multinode-232047:/home/docker/cp-test_multinode-232047-m02_multinode-232047.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047 "sudo cat /home/docker/cp-test_multinode-232047-m02_multinode-232047.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp multinode-232047-m02:/home/docker/cp-test.txt multinode-232047-m03:/home/docker/cp-test_multinode-232047-m02_multinode-232047-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m03 "sudo cat /home/docker/cp-test_multinode-232047-m02_multinode-232047-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp testdata/cp-test.txt multinode-232047-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp multinode-232047-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2281906034/001/cp-test_multinode-232047-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp multinode-232047-m03:/home/docker/cp-test.txt multinode-232047:/home/docker/cp-test_multinode-232047-m03_multinode-232047.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047 "sudo cat /home/docker/cp-test_multinode-232047-m03_multinode-232047.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 cp multinode-232047-m03:/home/docker/cp-test.txt multinode-232047-m02:/home/docker/cp-test_multinode-232047-m03_multinode-232047-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 ssh -n multinode-232047-m02 "sudo cat /home/docker/cp-test_multinode-232047-m03_multinode-232047-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.91s)

TestMultiNode/serial/StopNode (2.47s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-232047 node stop m03: (1.544787819s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-232047 status: exit status 7 (450.306635ms)

-- stdout --
	multinode-232047
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-232047-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-232047-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-232047 status --alsologtostderr: exit status 7 (471.376771ms)

-- stdout --
	multinode-232047
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-232047-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-232047-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0630 15:04:18.181859 1491832 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:04:18.182155 1491832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:04:18.182172 1491832 out.go:358] Setting ErrFile to fd 2...
	I0630 15:04:18.182177 1491832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:04:18.182395 1491832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 15:04:18.182597 1491832 out.go:352] Setting JSON to false
	I0630 15:04:18.182646 1491832 mustload.go:65] Loading cluster: multinode-232047
	I0630 15:04:18.182891 1491832 notify.go:220] Checking for updates...
	I0630 15:04:18.183075 1491832 config.go:182] Loaded profile config "multinode-232047": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 15:04:18.183099 1491832 status.go:174] checking status of multinode-232047 ...
	I0630 15:04:18.183511 1491832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:04:18.183577 1491832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:04:18.205261 1491832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37767
	I0630 15:04:18.205908 1491832 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:04:18.206577 1491832 main.go:141] libmachine: Using API Version  1
	I0630 15:04:18.206603 1491832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:04:18.207189 1491832 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:04:18.207579 1491832 main.go:141] libmachine: (multinode-232047) Calling .GetState
	I0630 15:04:18.210080 1491832 status.go:371] multinode-232047 host status = "Running" (err=<nil>)
	I0630 15:04:18.210105 1491832 host.go:66] Checking if "multinode-232047" exists ...
	I0630 15:04:18.210508 1491832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:04:18.210568 1491832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:04:18.227952 1491832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0630 15:04:18.228598 1491832 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:04:18.229257 1491832 main.go:141] libmachine: Using API Version  1
	I0630 15:04:18.229304 1491832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:04:18.229789 1491832 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:04:18.229993 1491832 main.go:141] libmachine: (multinode-232047) Calling .GetIP
	I0630 15:04:18.233139 1491832 main.go:141] libmachine: (multinode-232047) DBG | domain multinode-232047 has defined MAC address 52:54:00:80:5b:6e in network mk-multinode-232047
	I0630 15:04:18.233616 1491832 main.go:141] libmachine: (multinode-232047) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:5b:6e", ip: ""} in network mk-multinode-232047: {Iface:virbr1 ExpiryTime:2025-06-30 16:01:33 +0000 UTC Type:0 Mac:52:54:00:80:5b:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-232047 Clientid:01:52:54:00:80:5b:6e}
	I0630 15:04:18.233654 1491832 main.go:141] libmachine: (multinode-232047) DBG | domain multinode-232047 has defined IP address 192.168.39.127 and MAC address 52:54:00:80:5b:6e in network mk-multinode-232047
	I0630 15:04:18.233786 1491832 host.go:66] Checking if "multinode-232047" exists ...
	I0630 15:04:18.234242 1491832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:04:18.234289 1491832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:04:18.250709 1491832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0630 15:04:18.251222 1491832 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:04:18.251727 1491832 main.go:141] libmachine: Using API Version  1
	I0630 15:04:18.251751 1491832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:04:18.252131 1491832 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:04:18.252343 1491832 main.go:141] libmachine: (multinode-232047) Calling .DriverName
	I0630 15:04:18.252541 1491832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 15:04:18.252565 1491832 main.go:141] libmachine: (multinode-232047) Calling .GetSSHHostname
	I0630 15:04:18.255350 1491832 main.go:141] libmachine: (multinode-232047) DBG | domain multinode-232047 has defined MAC address 52:54:00:80:5b:6e in network mk-multinode-232047
	I0630 15:04:18.255763 1491832 main.go:141] libmachine: (multinode-232047) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:5b:6e", ip: ""} in network mk-multinode-232047: {Iface:virbr1 ExpiryTime:2025-06-30 16:01:33 +0000 UTC Type:0 Mac:52:54:00:80:5b:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-232047 Clientid:01:52:54:00:80:5b:6e}
	I0630 15:04:18.255796 1491832 main.go:141] libmachine: (multinode-232047) DBG | domain multinode-232047 has defined IP address 192.168.39.127 and MAC address 52:54:00:80:5b:6e in network mk-multinode-232047
	I0630 15:04:18.256108 1491832 main.go:141] libmachine: (multinode-232047) Calling .GetSSHPort
	I0630 15:04:18.256343 1491832 main.go:141] libmachine: (multinode-232047) Calling .GetSSHKeyPath
	I0630 15:04:18.256634 1491832 main.go:141] libmachine: (multinode-232047) Calling .GetSSHUsername
	I0630 15:04:18.256838 1491832 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/multinode-232047/id_rsa Username:docker}
	I0630 15:04:18.341464 1491832 ssh_runner.go:195] Run: systemctl --version
	I0630 15:04:18.348246 1491832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:04:18.365608 1491832 kubeconfig.go:125] found "multinode-232047" server: "https://192.168.39.127:8443"
	I0630 15:04:18.365666 1491832 api_server.go:166] Checking apiserver status ...
	I0630 15:04:18.365713 1491832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0630 15:04:18.383590 1491832 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0630 15:04:18.395387 1491832 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0630 15:04:18.395462 1491832 ssh_runner.go:195] Run: ls
	I0630 15:04:18.401284 1491832 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0630 15:04:18.406792 1491832 api_server.go:279] https://192.168.39.127:8443/healthz returned 200:
	ok
	I0630 15:04:18.406840 1491832 status.go:463] multinode-232047 apiserver status = Running (err=<nil>)
	I0630 15:04:18.406853 1491832 status.go:176] multinode-232047 status: &{Name:multinode-232047 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 15:04:18.406870 1491832 status.go:174] checking status of multinode-232047-m02 ...
	I0630 15:04:18.407190 1491832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:04:18.407231 1491832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:04:18.423616 1491832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44119
	I0630 15:04:18.424128 1491832 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:04:18.424688 1491832 main.go:141] libmachine: Using API Version  1
	I0630 15:04:18.424721 1491832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:04:18.425027 1491832 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:04:18.425245 1491832 main.go:141] libmachine: (multinode-232047-m02) Calling .GetState
	I0630 15:04:18.426751 1491832 status.go:371] multinode-232047-m02 host status = "Running" (err=<nil>)
	I0630 15:04:18.426771 1491832 host.go:66] Checking if "multinode-232047-m02" exists ...
	I0630 15:04:18.427110 1491832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:04:18.427155 1491832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:04:18.443216 1491832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I0630 15:04:18.443687 1491832 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:04:18.444218 1491832 main.go:141] libmachine: Using API Version  1
	I0630 15:04:18.444258 1491832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:04:18.444685 1491832 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:04:18.444915 1491832 main.go:141] libmachine: (multinode-232047-m02) Calling .GetIP
	I0630 15:04:18.447898 1491832 main.go:141] libmachine: (multinode-232047-m02) DBG | domain multinode-232047-m02 has defined MAC address 52:54:00:1a:38:98 in network mk-multinode-232047
	I0630 15:04:18.448416 1491832 main.go:141] libmachine: (multinode-232047-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:38:98", ip: ""} in network mk-multinode-232047: {Iface:virbr1 ExpiryTime:2025-06-30 16:02:38 +0000 UTC Type:0 Mac:52:54:00:1a:38:98 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-232047-m02 Clientid:01:52:54:00:1a:38:98}
	I0630 15:04:18.448442 1491832 main.go:141] libmachine: (multinode-232047-m02) DBG | domain multinode-232047-m02 has defined IP address 192.168.39.50 and MAC address 52:54:00:1a:38:98 in network mk-multinode-232047
	I0630 15:04:18.448529 1491832 host.go:66] Checking if "multinode-232047-m02" exists ...
	I0630 15:04:18.448836 1491832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:04:18.448873 1491832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:04:18.465318 1491832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0630 15:04:18.465861 1491832 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:04:18.466329 1491832 main.go:141] libmachine: Using API Version  1
	I0630 15:04:18.466364 1491832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:04:18.466719 1491832 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:04:18.466982 1491832 main.go:141] libmachine: (multinode-232047-m02) Calling .DriverName
	I0630 15:04:18.467181 1491832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0630 15:04:18.467204 1491832 main.go:141] libmachine: (multinode-232047-m02) Calling .GetSSHHostname
	I0630 15:04:18.470004 1491832 main.go:141] libmachine: (multinode-232047-m02) DBG | domain multinode-232047-m02 has defined MAC address 52:54:00:1a:38:98 in network mk-multinode-232047
	I0630 15:04:18.470409 1491832 main.go:141] libmachine: (multinode-232047-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:38:98", ip: ""} in network mk-multinode-232047: {Iface:virbr1 ExpiryTime:2025-06-30 16:02:38 +0000 UTC Type:0 Mac:52:54:00:1a:38:98 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-232047-m02 Clientid:01:52:54:00:1a:38:98}
	I0630 15:04:18.470440 1491832 main.go:141] libmachine: (multinode-232047-m02) DBG | domain multinode-232047-m02 has defined IP address 192.168.39.50 and MAC address 52:54:00:1a:38:98 in network mk-multinode-232047
	I0630 15:04:18.470551 1491832 main.go:141] libmachine: (multinode-232047-m02) Calling .GetSSHPort
	I0630 15:04:18.470731 1491832 main.go:141] libmachine: (multinode-232047-m02) Calling .GetSSHKeyPath
	I0630 15:04:18.470878 1491832 main.go:141] libmachine: (multinode-232047-m02) Calling .GetSSHUsername
	I0630 15:04:18.471021 1491832 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20991-1452140/.minikube/machines/multinode-232047-m02/id_rsa Username:docker}
	I0630 15:04:18.562747 1491832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0630 15:04:18.580143 1491832 status.go:176] multinode-232047-m02 status: &{Name:multinode-232047-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0630 15:04:18.580200 1491832 status.go:174] checking status of multinode-232047-m03 ...
	I0630 15:04:18.580578 1491832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:04:18.580630 1491832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:04:18.596919 1491832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42971
	I0630 15:04:18.597546 1491832 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:04:18.598229 1491832 main.go:141] libmachine: Using API Version  1
	I0630 15:04:18.598253 1491832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:04:18.598760 1491832 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:04:18.598998 1491832 main.go:141] libmachine: (multinode-232047-m03) Calling .GetState
	I0630 15:04:18.601057 1491832 status.go:371] multinode-232047-m03 host status = "Stopped" (err=<nil>)
	I0630 15:04:18.601076 1491832 status.go:384] host is not running, skipping remaining checks
	I0630 15:04:18.601083 1491832 status.go:176] multinode-232047-m03 status: &{Name:multinode-232047-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)
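The status traces above repeatedly run `df -h /var | awk 'NR==2{print $5}'` over SSH to read each VM's disk usage. The awk program selects row 2 (the data row below the header) and prints field 5, the Use% column. A sketch against hypothetical `df -h /var` output (the sample sizes are illustrative, not from this run):

```shell
# Hypothetical `df -h /var` output: header on line 1, data on line 2.
df_out='Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        17G  3.2G   13G  20% /var'

# NR==2 selects the data row; $5 is the fifth whitespace-separated
# field, i.e. the Use% column.
use_pct=$(printf '%s\n' "$df_out" | awk 'NR==2{print $5}')
echo "$use_pct"
```

A fragile point worth noting: this relies on df printing exactly one data row for /var on a single line, which holds for the guest VM image here.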
TestMultiNode/serial/StartAfterStop (36.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-232047 node start m03 -v=5 --alsologtostderr: (35.7831605s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.47s)

TestMultiNode/serial/RestartKeepsNodes (310.5s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-232047
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-232047
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-232047: (3m3.049049106s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232047 --wait=true -v=5 --alsologtostderr
E0630 15:08:24.221469 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:09:05.493828 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-232047 --wait=true -v=5 --alsologtostderr: (2m7.339282952s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-232047
--- PASS: TestMultiNode/serial/RestartKeepsNodes (310.50s)

TestMultiNode/serial/DeleteNode (2.37s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-232047 node delete m03: (1.791501582s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.37s)

TestMultiNode/serial/StopMultiNode (182.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 stop
E0630 15:12:08.564706 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-232047 stop: (3m1.797319822s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-232047 status: exit status 7 (108.465116ms)

-- stdout --
	multinode-232047
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-232047-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-232047 status --alsologtostderr: exit status 7 (100.529536ms)

-- stdout --
	multinode-232047
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-232047-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0630 15:13:09.901136 1494516 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:13:09.901422 1494516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:13:09.901438 1494516 out.go:358] Setting ErrFile to fd 2...
	I0630 15:13:09.901442 1494516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:13:09.901670 1494516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 15:13:09.901861 1494516 out.go:352] Setting JSON to false
	I0630 15:13:09.901900 1494516 mustload.go:65] Loading cluster: multinode-232047
	I0630 15:13:09.902097 1494516 notify.go:220] Checking for updates...
	I0630 15:13:09.902289 1494516 config.go:182] Loaded profile config "multinode-232047": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 15:13:09.902311 1494516 status.go:174] checking status of multinode-232047 ...
	I0630 15:13:09.902781 1494516 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:13:09.902830 1494516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:13:09.921589 1494516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0630 15:13:09.922209 1494516 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:13:09.922807 1494516 main.go:141] libmachine: Using API Version  1
	I0630 15:13:09.922847 1494516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:13:09.923281 1494516 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:13:09.923647 1494516 main.go:141] libmachine: (multinode-232047) Calling .GetState
	I0630 15:13:09.925789 1494516 status.go:371] multinode-232047 host status = "Stopped" (err=<nil>)
	I0630 15:13:09.925808 1494516 status.go:384] host is not running, skipping remaining checks
	I0630 15:13:09.925814 1494516 status.go:176] multinode-232047 status: &{Name:multinode-232047 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0630 15:13:09.925835 1494516 status.go:174] checking status of multinode-232047-m02 ...
	I0630 15:13:09.926131 1494516 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20991-1452140/.minikube/bin/docker-machine-driver-kvm2
	I0630 15:13:09.926177 1494516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0630 15:13:09.942864 1494516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46037
	I0630 15:13:09.943421 1494516 main.go:141] libmachine: () Calling .GetVersion
	I0630 15:13:09.943902 1494516 main.go:141] libmachine: Using API Version  1
	I0630 15:13:09.943930 1494516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0630 15:13:09.944300 1494516 main.go:141] libmachine: () Calling .GetMachineName
	I0630 15:13:09.944615 1494516 main.go:141] libmachine: (multinode-232047-m02) Calling .GetState
	I0630 15:13:09.947064 1494516 status.go:371] multinode-232047-m02 host status = "Stopped" (err=<nil>)
	I0630 15:13:09.947083 1494516 status.go:384] host is not running, skipping remaining checks
	I0630 15:13:09.947091 1494516 status.go:176] multinode-232047-m02 status: &{Name:multinode-232047-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.01s)

TestMultiNode/serial/RestartMultiNode (86.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232047 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0630 15:13:24.222715 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:14:05.493809 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-232047 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m25.999209228s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232047 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.58s)

TestMultiNode/serial/ValidateNameConflict (50.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-232047
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232047-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-232047-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (70.020393ms)

-- stdout --
	* [multinode-232047-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-232047-m02' is duplicated with machine name 'multinode-232047-m02' in profile 'multinode-232047'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232047-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-232047-m03 --driver=kvm2  --container-runtime=containerd: (49.426228431s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-232047
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-232047: exit status 80 (240.320522ms)

-- stdout --
	* Adding node m03 to cluster multinode-232047 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-232047-m03 already exists in multinode-232047-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-232047-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.67s)

TestPreload (228.23s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-048927 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-048927 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m22.607458521s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-048927 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-048927 image pull gcr.io/k8s-minikube/busybox: (1.78757872s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-048927
E0630 15:18:07.293416 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:18:24.228657 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-048927: (1m31.025902213s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-048927 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0630 15:19:05.493910 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-048927 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (51.659581003s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-048927 image list
helpers_test.go:175: Cleaning up "test-preload-048927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-048927
--- PASS: TestPreload (228.23s)

TestScheduledStopUnix (119.62s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-279521 --memory=3072 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-279521 --memory=3072 --driver=kvm2  --container-runtime=containerd: (47.806046062s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-279521 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-279521 -n scheduled-stop-279521
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-279521 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0630 15:20:05.471878 1459494 retry.go:31] will retry after 124.372µs: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.473083 1459494 retry.go:31] will retry after 216.91µs: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.474273 1459494 retry.go:31] will retry after 230.209µs: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.475396 1459494 retry.go:31] will retry after 460.322µs: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.476570 1459494 retry.go:31] will retry after 523.671µs: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.477717 1459494 retry.go:31] will retry after 382.223µs: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.478892 1459494 retry.go:31] will retry after 570.965µs: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.480067 1459494 retry.go:31] will retry after 2.109035ms: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.483311 1459494 retry.go:31] will retry after 3.534419ms: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.487547 1459494 retry.go:31] will retry after 3.092026ms: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.490734 1459494 retry.go:31] will retry after 7.801827ms: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.499167 1459494 retry.go:31] will retry after 6.027091ms: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.505408 1459494 retry.go:31] will retry after 10.726839ms: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.516686 1459494 retry.go:31] will retry after 26.03302ms: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
I0630 15:20:05.542924 1459494 retry.go:31] will retry after 29.009287ms: open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/scheduled-stop-279521/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-279521 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-279521 -n scheduled-stop-279521
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-279521
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-279521 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-279521
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-279521: exit status 7 (81.141226ms)

-- stdout --
	scheduled-stop-279521
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-279521 -n scheduled-stop-279521
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-279521 -n scheduled-stop-279521: exit status 7 (72.004717ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-279521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-279521
--- PASS: TestScheduledStopUnix (119.62s)

TestRunningBinaryUpgrade (204.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2557612184 start -p running-upgrade-097362 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2557612184 start -p running-upgrade-097362 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd: (1m57.118336765s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-097362 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-097362 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m24.961553889s)
helpers_test.go:175: Cleaning up "running-upgrade-097362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-097362
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-097362: (1.326244526s)
--- PASS: TestRunningBinaryUpgrade (204.10s)

TestKubernetesUpgrade (208.83s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-688995 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-688995 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m36.167121762s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-688995
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-688995: (2.521986192s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-688995 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-688995 status --format={{.Host}}: exit status 7 (109.260066ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-688995 --memory=3072 --kubernetes-version=v1.33.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0630 15:24:05.494127 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-688995 --memory=3072 --kubernetes-version=v1.33.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (39.579924487s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-688995 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-688995 --memory=3072 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-688995 --memory=3072 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (88.381599ms)

-- stdout --
	* [kubernetes-upgrade-688995] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.33.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-688995
	    minikube start -p kubernetes-upgrade-688995 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6889952 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.33.2, by running:
	    
	    minikube start -p kubernetes-upgrade-688995 --kubernetes-version=v1.33.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-688995 --memory=3072 --kubernetes-version=v1.33.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-688995 --memory=3072 --kubernetes-version=v1.33.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m9.239075851s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-688995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-688995
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-688995: (1.063172147s)
--- PASS: TestKubernetesUpgrade (208.83s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-399039 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-399039 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (87.674014ms)

-- stdout --
	* [NoKubernetes-399039] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (100.47s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-399039 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-399039 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m40.161035632s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-399039 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.47s)

TestNetworkPlugins/group/false (3.47s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-897324 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-897324 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (133.451754ms)

-- stdout --
	* [false-897324] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0630 15:21:20.279180 1499384 out.go:345] Setting OutFile to fd 1 ...
	I0630 15:21:20.279365 1499384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:21:20.279382 1499384 out.go:358] Setting ErrFile to fd 2...
	I0630 15:21:20.279388 1499384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0630 15:21:20.279734 1499384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20991-1452140/.minikube/bin
	I0630 15:21:20.280758 1499384 out.go:352] Setting JSON to false
	I0630 15:21:20.282305 1499384 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":54203,"bootTime":1751242677,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0630 15:21:20.282511 1499384 start.go:140] virtualization: kvm guest
	I0630 15:21:20.284903 1499384 out.go:177] * [false-897324] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0630 15:21:20.287325 1499384 notify.go:220] Checking for updates...
	I0630 15:21:20.287356 1499384 out.go:177]   - MINIKUBE_LOCATION=20991
	I0630 15:21:20.288960 1499384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0630 15:21:20.290604 1499384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20991-1452140/kubeconfig
	I0630 15:21:20.292099 1499384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20991-1452140/.minikube
	I0630 15:21:20.293478 1499384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0630 15:21:20.294743 1499384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0630 15:21:20.296944 1499384 config.go:182] Loaded profile config "NoKubernetes-399039": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 15:21:20.297101 1499384 config.go:182] Loaded profile config "force-systemd-env-519372": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 15:21:20.297259 1499384 config.go:182] Loaded profile config "offline-containerd-345672": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
	I0630 15:21:20.297414 1499384 driver.go:404] Setting default libvirt URI to qemu:///system
	I0630 15:21:20.339582 1499384 out.go:177] * Using the kvm2 driver based on user configuration
	I0630 15:21:20.341123 1499384 start.go:304] selected driver: kvm2
	I0630 15:21:20.341209 1499384 start.go:908] validating driver "kvm2" against <nil>
	I0630 15:21:20.341237 1499384 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0630 15:21:20.343384 1499384 out.go:201] 
	W0630 15:21:20.344680 1499384 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0630 15:21:20.345967 1499384 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-897324 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-897324

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-897324

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-897324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-897324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-897324

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-897324

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-897324

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-897324

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-897324

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-897324

>>> host: /etc/nsswitch.conf:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

>>> host: /etc/hosts:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

>>> host: /etc/resolv.conf:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-897324

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-897324" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-897324

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-897324"

                                                
                                                
----------------------- debugLogs end: false-897324 [took: 3.173323774s] --------------------------------
helpers_test.go:175: Cleaning up "false-897324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-897324
--- PASS: TestNetworkPlugins/group/false (3.47s)
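The repeated "context was not found" and "Profile not found" lines above are expected: this group only verifies that the invalid `--cni=false` value is rejected, so no `false-897324` cluster or kubeconfig context is ever created, and every debug-log lookup fails by design. A minimal sketch of that lookup against the empty kubeconfig dumped under `>>> k8s: kubectl config:` (the file path and grep-based check are illustrative, not what minikube actually runs):

```shell
# Recreate the empty kubeconfig shown in the ">>> k8s: kubectl config:" section.
cat > /tmp/empty-kubeconfig.yaml <<'EOF'
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
EOF

# Look for the context the debug collector asked for; with "contexts: null"
# nothing can match, which is why every kubectl call above errored out.
if grep -q 'name: false-897324' /tmp/empty-kubeconfig.yaml; then
  echo "context found"
else
  echo "context was not found for specified context: false-897324"
fi
```

Against a kubeconfig with a real `contexts:` entry for the profile, the same lookup would succeed.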

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestStoppedBinaryUpgrade/Upgrade (182.37s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.154065093 start -p stopped-upgrade-504234 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.154065093 start -p stopped-upgrade-504234 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd: (1m46.49730773s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.154065093 -p stopped-upgrade-504234 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.154065093 -p stopped-upgrade-504234 stop: (1.361514371s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-504234 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-504234 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m14.510013458s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (182.37s)

TestNoKubernetes/serial/StartWithStopK8s (75.03s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-399039 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0630 15:23:24.221409 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-399039 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m13.640252206s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-399039 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-399039 status -o json: exit status 2 (305.044076ms)

-- stdout --
	{"Name":"NoKubernetes-399039","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-399039
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-399039: (1.088550382s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (75.03s)

TestNoKubernetes/serial/Start (36.68s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-399039 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-399039 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (36.67575635s)
--- PASS: TestNoKubernetes/serial/Start (36.68s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-399039 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-399039 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.638644ms)

** stderr **
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (7.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.026180817s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.073115524s)
--- PASS: TestNoKubernetes/serial/ProfileList (7.10s)

TestPause/serial/Start (83.14s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-955284 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-955284 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m23.141799979s)
--- PASS: TestPause/serial/Start (83.14s)

TestNoKubernetes/serial/Stop (1.59s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-399039
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-399039: (1.589035394s)
--- PASS: TestNoKubernetes/serial/Stop (1.59s)

TestNoKubernetes/serial/StartNoArgs (60.68s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-399039 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-399039 --driver=kvm2  --container-runtime=containerd: (1m0.681596499s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (60.68s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-504234
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-399039 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-399039 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.42859ms)

** stderr **
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestPause/serial/SecondStartNoReconfiguration (110.08s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-955284 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-955284 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m50.054050608s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (110.08s)

TestNetworkPlugins/group/auto/Start (91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m30.995737925s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.00s)

TestNetworkPlugins/group/flannel/Start (77.36s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m17.354876015s)
--- PASS: TestNetworkPlugins/group/flannel/Start (77.36s)

TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-955284 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-955284 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-955284 --output=json --layout=cluster: exit status 2 (293.644709ms)

-- stdout --
	{"Name":"pause-955284","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-955284","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
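As the stdout block above shows, the cluster-layout status encodes each component's state as an HTTP-style code (200 "OK", 405 "Stopped", 418 "Paused"), which is why the paused cluster reports `StatusCode: 418` while the command exits with status 2. A quick sketch that pulls the per-component states out of that payload (JSON copied verbatim from the log; the temp-file path is illustrative and `python3` is assumed available):

```shell
# Save the status payload from the stdout block above.
cat > /tmp/pause-status.json <<'EOF'
{"Name":"pause-955284","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-955284","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
EOF

# Print each node component with its status code and name.
python3 - <<'EOF'
import json

with open("/tmp/pause-status.json") as f:
    status = json.load(f)

for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(f'{name}: {comp["StatusCode"]} {comp["StatusName"]}')
EOF
# prints:
#   apiserver: 418 Paused
#   kubelet: 405 Stopped
```

This matches what the test asserts: apiserver paused, kubelet stopped, kubeconfig still OK.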

                                                
                                    
x
+
TestPause/serial/Unpause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-955284 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.95s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-955284 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-955284 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-955284 --alsologtostderr -v=5: (1.119055976s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (2.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.389953329s)
--- PASS: TestPause/serial/VerifyDeletedResources (2.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (68.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m8.880030341s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-897324 "pgrep -a kubelet"
I0630 15:28:20.240523 1459494 config.go:182] Loaded profile config "auto-897324": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-897324 replace --force -f testdata/netcat-deployment.yaml
I0630 15:28:20.550436 1459494 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sxb2w" [80870eaf-cbe1-4d15-967a-5f33d8db6995] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sxb2w" [80870eaf-cbe1-4d15-967a-5f33d8db6995] Running
E0630 15:28:24.222037 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.006437967s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.32s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-897324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0630 15:28:48.566820 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m7.274585236s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6g57h" [34f19790-5048-49c4-adac-fca87112687f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00409105s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-897324 "pgrep -a kubelet"
I0630 15:29:04.311176 1459494 config.go:182] Loaded profile config "flannel-897324": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-897324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hxm7b" [6007f5de-a7b5-4d03-927e-e3385dfd49dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0630 15:29:05.494481 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "netcat-5d86dc444-hxm7b" [6007f5de-a7b5-4d03-927e-e3385dfd49dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003972788s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-897324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-897324 "pgrep -a kubelet"
I0630 15:29:21.918738 1459494 config.go:182] Loaded profile config "enable-default-cni-897324": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-897324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-w54ll" [38f2f817-7b69-419a-947c-720ea63e4bbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-w54ll" [38f2f817-7b69-419a-947c-720ea63e4bbd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004145096s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-897324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (84.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m24.574685195s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.57s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (86.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m26.204637917s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-897324 "pgrep -a kubelet"
I0630 15:29:53.385675 1459494 config.go:182] Loaded profile config "bridge-897324": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-897324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4fspw" [f49e25fc-08c5-478b-a251-059ccf794323] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4fspw" [f49e25fc-08c5-478b-a251-059ccf794323] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004056269s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-897324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (95.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-897324 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m35.662576908s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.66s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4mgcq" [a8a108a1-b270-4d69-8f15-be4e6f9deeef] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-4mgcq" [a8a108a1-b270-4d69-8f15-be4e6f9deeef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005795595s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-897324 "pgrep -a kubelet"
I0630 15:31:03.547652 1459494 config.go:182] Loaded profile config "calico-897324": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-897324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5vsbh" [0ffdbd30-5d2b-4cad-b733-45d0db867f7d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5vsbh" [0ffdbd30-5d2b-4cad-b733-45d0db867f7d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005955549s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (150.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-108943 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-108943 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m30.217189099s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (150.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-897324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mhxxw" [922561cc-6c3d-4c3e-89d7-a7c04029251a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004930752s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-897324 "pgrep -a kubelet"
I0630 15:31:21.779804 1459494 config.go:182] Loaded profile config "kindnet-897324": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-897324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9nllv" [f5c21f49-0274-45e1-b704-c873c97dedbc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9nllv" [f5c21f49-0274-45e1-b704-c873c97dedbc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004571664s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-897324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (89.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-043396 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-043396 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2: (1m29.566990984s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (89.57s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-663009 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-663009 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2: (1m26.30296466s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-897324 "pgrep -a kubelet"
I0630 15:31:56.038495 1459494 config.go:182] Loaded profile config "custom-flannel-897324": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-897324 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-84svk" [415dc648-3993-42b8-8af2-f21e0dca1b34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-84svk" [415dc648-3993-42b8-8af2-f21e0dca1b34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003861861s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-897324 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-897324 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
E0630 15:36:07.555444 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-615048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-615048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2: (1m13.828842977s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-043396 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [39eca42c-49e0-4e6d-8300-11ce210c7b29] Pending
helpers_test.go:344: "busybox" [39eca42c-49e0-4e6d-8300-11ce210c7b29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [39eca42c-49e0-4e6d-8300-11ce210c7b29] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004514291s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-043396 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-043396 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-043396 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.246664507s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-043396 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/no-preload/serial/Stop (91.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-043396 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-043396 --alsologtostderr -v=3: (1m31.113816191s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.11s)

TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-663009 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [962770c3-2db6-4abe-bbc5-eb65cfdb3383] Pending
helpers_test.go:344: "busybox" [962770c3-2db6-4abe-bbc5-eb65cfdb3383] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0630 15:33:20.536054 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:20.542508 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:20.553986 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:20.575998 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:20.617540 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:20.699138 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:20.860775 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:21.182923 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "busybox" [962770c3-2db6-4abe-bbc5-eb65cfdb3383] Running
E0630 15:33:21.824591 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:23.106019 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:24.221421 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:25.667948 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004020628s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-663009 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-663009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-663009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014483866s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-663009 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (91.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-663009 --alsologtostderr -v=3
E0630 15:33:30.790143 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-663009 --alsologtostderr -v=3: (1m31.062262231s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.06s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-615048 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0303bd6d-232f-4f52-be39-943272ce01a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0630 15:33:41.031506 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:344: "busybox" [0303bd6d-232f-4f52-be39-943272ce01a1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004201722s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-615048 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-108943 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4990c227-04d9-4d9f-9d10-c06ace9ac9d2] Pending
helpers_test.go:344: "busybox" [4990c227-04d9-4d9f-9d10-c06ace9ac9d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4990c227-04d9-4d9f-9d10-c06ace9ac9d2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003834924s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-108943 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-615048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-615048 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (90.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-615048 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-615048 --alsologtostderr -v=3: (1m30.868515984s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-108943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-108943 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/old-k8s-version/serial/Stop (91.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-108943 --alsologtostderr -v=3
E0630 15:33:58.084633 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:58.091040 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:58.102510 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:58.123973 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:58.165523 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:58.247188 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:58.408892 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:58.730359 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:33:59.372879 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:00.654861 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:01.513758 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:03.217264 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:05.494448 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/functional-125151/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:08.339506 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:18.581650 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:22.211773 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:22.218295 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:22.229806 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:22.251316 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:22.292812 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:22.374358 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:22.536064 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:22.858243 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:23.500624 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:24.782410 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:27.344052 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:32.465852 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:39.063483 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:42.475933 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/auto-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:42.708083 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-108943 --alsologtostderr -v=3: (1m31.15741857s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043396 -n no-preload-043396
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043396 -n no-preload-043396: exit status 7 (77.920774ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-043396 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (46.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-043396 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2
E0630 15:34:47.295310 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/addons-412730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:53.606955 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:53.613487 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:53.625032 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:53.646613 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:53.688366 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:53.770605 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:53.932302 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:54.253675 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:54.895592 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:34:56.176941 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-043396 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2: (46.141912611s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-043396 -n no-preload-043396
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.44s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-663009 -n embed-certs-663009
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-663009 -n embed-certs-663009: exit status 7 (69.819626ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-663009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0630 15:34:58.738992 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (51.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-663009 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2
E0630 15:35:03.189697 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:03.860409 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:14.102769 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-663009 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2: (50.651209343s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-663009 -n embed-certs-663009
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-615048 -n default-k8s-diff-port-615048
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-615048 -n default-k8s-diff-port-615048: exit status 7 (78.301917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-615048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.73s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-615048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2
E0630 15:35:20.025747 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-615048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2: (52.290376136s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-615048 -n default-k8s-diff-port-615048
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108943 -n old-k8s-version-108943
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108943 -n old-k8s-version-108943: exit status 7 (110.02655ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-108943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (148.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-108943 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-108943 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m28.117793891s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108943 -n old-k8s-version-108943
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (148.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-77bvb" [a29dc010-109f-4230-be0b-1a4936d50982] Running
E0630 15:35:34.584718 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007373904s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-77bvb" [a29dc010-109f-4230-be0b-1a4936d50982] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004605625s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-043396 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-043396 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.52s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-043396 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-043396 -n no-preload-043396
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-043396 -n no-preload-043396: exit status 2 (318.95996ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-043396 -n no-preload-043396
E0630 15:35:44.151981 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-043396 -n no-preload-043396: exit status 2 (314.216769ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-043396 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-043396 --alsologtostderr -v=1: (1.082494382s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-043396 -n no-preload-043396
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-043396 -n no-preload-043396
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (70.3s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-543415 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-543415 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2: (1m10.295132835s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (70.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-9r4qw" [d8748f52-2c6d-47a5-b85e-fe5678dd0970] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004782272s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-9r4qw" [d8748f52-2c6d-47a5-b85e-fe5678dd0970] Running
E0630 15:35:57.299842 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:57.306389 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:57.317940 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:57.339556 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:57.381145 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:57.463569 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:57.625771 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:57.947936 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:58.590057 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:35:59.871788 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004604724s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-663009 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-663009 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.79s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-663009 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-663009 -n embed-certs-663009
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-663009 -n embed-certs-663009: exit status 2 (261.273836ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-663009 -n embed-certs-663009
E0630 15:36:02.433261 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-663009 -n embed-certs-663009: exit status 2 (256.678627ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-663009 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-663009 -n embed-certs-663009
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-663009 -n embed-certs-663009
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mjpr7" [6a1d550b-9c56-4a75-916d-511c6e441050] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mjpr7" [6a1d550b-9c56-4a75-916d-511c6e441050] Running
E0630 15:36:15.505890 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:15.512461 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:15.523870 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:15.545418 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:15.546592 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:15.587192 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:15.668678 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:15.830266 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:16.152510 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:16.794069 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:17.797524 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:36:18.076439 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.00629933s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mjpr7" [6a1d550b-9c56-4a75-916d-511c6e441050] Running
E0630 15:36:20.637976 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005146223s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-615048 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-615048 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-615048 --alsologtostderr -v=1
E0630 15:36:25.759754 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-615048 -n default-k8s-diff-port-615048
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-615048 -n default-k8s-diff-port-615048: exit status 2 (283.050018ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-615048 -n default-k8s-diff-port-615048
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-615048 -n default-k8s-diff-port-615048: exit status 2 (281.652293ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-615048 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-615048 -n default-k8s-diff-port-615048
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-615048 -n default-k8s-diff-port-615048
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-543415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0630 15:36:58.843724 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/custom-flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-543415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.125968302s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.37s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-543415 --alsologtostderr -v=3
E0630 15:37:01.405720 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/custom-flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:37:06.074317 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/enable-default-cni-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:37:06.527356 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/custom-flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-543415 --alsologtostderr -v=3: (7.374546983s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-543415 -n newest-cni-543415
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-543415 -n newest-cni-543415: exit status 7 (79.01015ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-543415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.44s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-543415 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2
E0630 15:37:16.769699 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/custom-flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:37:19.241797 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/calico-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:37:37.251882 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/custom-flannel-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:37:37.444861 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/kindnet-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:37:37.468459 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/bridge-897324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-543415 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.2: (38.101586829s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-543415 -n newest-cni-543415
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.44s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-543415 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (2.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-543415 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-543415 -n newest-cni-543415
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-543415 -n newest-cni-543415: exit status 2 (253.527856ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-543415 -n newest-cni-543415
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-543415 -n newest-cni-543415: exit status 2 (250.865247ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-543415 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-543415 -n newest-cni-543415
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-543415 -n newest-cni-543415
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.80s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5wpqn" [c51919f3-71de-48e6-8458-d0009b84c7d1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004482631s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5wpqn" [c51919f3-71de-48e6-8458-d0009b84c7d1] Running
E0630 15:38:02.965310 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:38:02.971756 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:38:02.983207 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:38:03.004720 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:38:03.046260 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:38:03.127964 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:38:03.289783 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0630 15:38:03.611661 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004665192s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-108943 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-108943 image list --format=json
E0630 15:38:04.253684 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-108943 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108943 -n old-k8s-version-108943
E0630 15:38:05.535737 1459494 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/20991-1452140/.minikube/profiles/no-preload-043396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108943 -n old-k8s-version-108943: exit status 2 (263.698929ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-108943 -n old-k8s-version-108943
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-108943 -n old-k8s-version-108943: exit status 2 (264.026558ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-108943 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108943 -n old-k8s-version-108943
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-108943 -n old-k8s-version-108943
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.76s)

Test skip (39/330)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.33.2/cached-images 0
15 TestDownloadOnly/v1.33.2/binaries 0
16 TestDownloadOnly/v1.33.2/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 3.32
267 TestNetworkPlugins/group/cilium 3.6
279 TestStartStop/group/disable-driver-mounts 0.2
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.33.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.33.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.33.2/cached-images (0.00s)

TestDownloadOnly/v1.33.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.33.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.33.2/binaries (0.00s)

TestDownloadOnly/v1.33.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.33.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.33.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.32s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-897324 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-897324

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-897324

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-897324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-897324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-897324

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-897324

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-897324

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-897324

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-897324

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-897324

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-897324

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-897324" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-897324

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-897324"

                                                
                                                
----------------------- debugLogs end: kubenet-897324 [took: 3.160738542s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-897324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-897324
--- SKIP: TestNetworkPlugins/group/kubenet (3.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.6s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-897324 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-897324

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-897324

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-897324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-897324

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-897324

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-897324

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-897324

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-897324

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-897324

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-897324

>>> host: /etc/nsswitch.conf:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: /etc/hosts:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: /etc/resolv.conf:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-897324

>>> host: crictl pods:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: crictl containers:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> k8s: describe netcat deployment:
error: context "cilium-897324" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-897324" does not exist

>>> k8s: netcat logs:
error: context "cilium-897324" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-897324" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-897324" does not exist

>>> k8s: coredns logs:
error: context "cilium-897324" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-897324" does not exist

>>> k8s: api server logs:
error: context "cilium-897324" does not exist

>>> host: /etc/cni:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: ip a s:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: ip r s:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: iptables-save:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: iptables table nat:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-897324

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-897324

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-897324" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-897324" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-897324

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-897324

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-897324" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-897324" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-897324" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-897324" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-897324" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: kubelet daemon config:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> k8s: kubelet logs:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-897324

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-897324" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897324"

                                                
                                                
----------------------- debugLogs end: cilium-897324 [took: 3.438833907s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-897324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-897324
--- SKIP: TestNetworkPlugins/group/cilium (3.60s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-465913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-465913
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)