Test Report: KVM_Linux_containerd 20720

b7440dc9e9eb90138d871b2ff610c46584e06ed3:2025-05-10:39516

Failed tests (5/329)

| Order | Failed Test                                   | Duration (s) |
|-------|-----------------------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress                   | 491.81       |
| 43    | TestAddons/parallel/LocalPath                 | 231.51       |
| 90    | TestFunctional/parallel/DashboardCmd          | 302.2        |
| 99    | TestFunctional/parallel/PersistentVolumeClaim | 189.84       |
| 103   | TestFunctional/parallel/MySQL                 | 602.67       |
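Each failed test can be re-run in isolation with standard Go subtest selection. A minimal local sketch, not taken from this run: the -run pattern is stock `go test` syntax and minikube's integration tests live under test/integration, but any extra harness flags this job passes (driver, container runtime, start args) are assumptions. The logs below show the harness driving out/minikube-linux-amd64, so that binary must be built first.

	# Hypothetical local re-run of the first failure from the table above;
	# assumes out/minikube-linux-amd64 has already been built.
	go test ./test/integration -v -timeout 60m \
	  -run 'TestAddons/parallel/Ingress'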
TestAddons/parallel/Ingress (491.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-661496 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-661496 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-661496 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fa098ebf-237d-4738-96c9-0bbde71445c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:250: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-661496 -n addons-661496
addons_test.go:250: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-05-10 17:52:38.459489921 +0000 UTC m=+826.103259959
addons_test.go:250: (dbg) Run:  kubectl --context addons-661496 describe po nginx -n default
addons_test.go:250: (dbg) kubectl --context addons-661496 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-661496/192.168.39.168
Start Time:       Sat, 10 May 2025 17:44:38 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.35
IPs:
  IP:  10.244.0.35
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5ztn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-j5ztn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/nginx to addons-661496
  Normal   Pulling    5m2s (x5 over 8m)       kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     4m59s (x5 over 7m58s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m59s (x5 over 7m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     2m51s (x20 over 7m57s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m40s (x21 over 7m57s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
addons_test.go:250: (dbg) Run:  kubectl --context addons-661496 logs nginx -n default
addons_test.go:250: (dbg) Non-zero exit: kubectl --context addons-661496 logs nginx -n default: exit status 1 (71.905701ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:250: kubectl --context addons-661496 logs nginx -n default: exit status 1
addons_test.go:251: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
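
The Events above pin the failure on Docker Hub's unauthenticated pull rate limit (HTTP 429) rather than on the ingress addon itself: every pull of docker.io/nginx:alpine was rejected, so the pod never left ImagePullBackOff. A minimal mitigation sketch, not part of this run, using only stock docker/minikube commands; the profile name addons-661496 is taken from the log above:

	# Pull once on the host (run `docker login` first to get the higher,
	# authenticated rate limit), then side-load the image into the cluster
	# so the kubelet never contacts registry-1.docker.io.
	docker pull docker.io/nginx:alpine
	minikube -p addons-661496 image load docker.io/nginx:alpine

Alternatively, a docker-registry pull secret (kubectl create secret docker-registry) referenced from the pod spec would let the kubelet pull as an authenticated user.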
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-661496 -n addons-661496
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 logs -n 25: (1.238249245s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-685238              | download-only-685238 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| start   | -o=json --download-only              | download-only-932669 | jenkins | v1.35.0 | 10 May 25 17:39 UTC |                     |
	|         | -p download-only-932669              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| delete  | -p download-only-932669              | download-only-932669 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| delete  | -p download-only-685238              | download-only-685238 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| delete  | -p download-only-932669              | download-only-932669 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| start   | --download-only -p                   | binary-mirror-772258 | jenkins | v1.35.0 | 10 May 25 17:39 UTC |                     |
	|         | binary-mirror-772258                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39889               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-772258              | binary-mirror-772258 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| addons  | enable dashboard -p                  | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:39 UTC |                     |
	|         | addons-661496                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:39 UTC |                     |
	|         | addons-661496                        |                      |         |         |                     |                     |
	| start   | -p addons-661496 --wait=true         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:43 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:43 UTC | 10 May 25 17:43 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:43 UTC | 10 May 25 17:44 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | -p addons-661496                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-661496 ip                     | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:45 UTC | 10 May 25 17:45 UTC |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:45 UTC | 10 May 25 17:45 UTC |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:47 UTC | 10 May 25 17:47 UTC |
	|         | storage-provisioner-rancher          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:39:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:39:30.720506 1172998 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:39:30.720759 1172998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:39:30.720769 1172998 out.go:358] Setting ErrFile to fd 2...
	I0510 17:39:30.720773 1172998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:39:30.720983 1172998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 17:39:30.721652 1172998 out.go:352] Setting JSON to false
	I0510 17:39:30.722607 1172998 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":19315,"bootTime":1746879456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:39:30.722729 1172998 start.go:140] virtualization: kvm guest
	I0510 17:39:30.724714 1172998 out.go:177] * [addons-661496] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:39:30.726285 1172998 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:39:30.726302 1172998 notify.go:220] Checking for updates...
	I0510 17:39:30.728697 1172998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:39:30.729927 1172998 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:39:30.731180 1172998 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:39:30.732364 1172998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:39:30.733647 1172998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:39:30.735138 1172998 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:39:30.766808 1172998 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 17:39:30.768483 1172998 start.go:304] selected driver: kvm2
	I0510 17:39:30.768498 1172998 start.go:908] validating driver "kvm2" against <nil>
	I0510 17:39:30.768511 1172998 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:39:30.769232 1172998 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:39:30.769318 1172998 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-1165049/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 17:39:30.784854 1172998 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 17:39:30.784902 1172998 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 17:39:30.785176 1172998 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 17:39:30.785208 1172998 cni.go:84] Creating CNI manager for ""
	I0510 17:39:30.785259 1172998 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:39:30.785268 1172998 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 17:39:30.785322 1172998 start.go:347] cluster config:
	{Name:addons-661496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:39:30.785417 1172998 iso.go:125] acquiring lock: {Name:mkc65d6718a5a236dac4e9cf2d61c7062c63896e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:39:30.787251 1172998 out.go:177] * Starting "addons-661496" primary control-plane node in "addons-661496" cluster
	I0510 17:39:30.788371 1172998 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime containerd
	I0510 17:39:30.788416 1172998 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4
	I0510 17:39:30.788430 1172998 cache.go:56] Caching tarball of preloaded images
	I0510 17:39:30.788562 1172998 preload.go:172] Found /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0510 17:39:30.788579 1172998 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on containerd
	I0510 17:39:30.788888 1172998 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/config.json ...
	I0510 17:39:30.788915 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/config.json: {Name:mkfaa167b5e6079cbdf7c27a2f4d987819f61e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:39:30.789104 1172998 start.go:360] acquireMachinesLock for addons-661496: {Name:mk94a427f3fc363027a2f9c3c99b3847312d5b6e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 17:39:30.789166 1172998 start.go:364] duration metric: took 44.744µs to acquireMachinesLock for "addons-661496"
	I0510 17:39:30.789206 1172998 start.go:93] Provisioning new machine with config: &{Name:addons-661496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0510 17:39:30.789259 1172998 start.go:125] createHost starting for "" (driver="kvm2")
	I0510 17:39:30.790777 1172998 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0510 17:39:30.790969 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:39:30.791011 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:39:30.805778 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0510 17:39:30.806313 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:39:30.806909 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:39:30.806933 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:39:30.807377 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:39:30.807583 1172998 main.go:141] libmachine: (addons-661496) Calling .GetMachineName
	I0510 17:39:30.807759 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:30.807902 1172998 start.go:159] libmachine.API.Create for "addons-661496" (driver="kvm2")
	I0510 17:39:30.807938 1172998 client.go:168] LocalClient.Create starting
	I0510 17:39:30.807977 1172998 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem
	I0510 17:39:30.863328 1172998 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/cert.pem
	I0510 17:39:31.017621 1172998 main.go:141] libmachine: Running pre-create checks...
	I0510 17:39:31.017649 1172998 main.go:141] libmachine: (addons-661496) Calling .PreCreateCheck
	I0510 17:39:31.018175 1172998 main.go:141] libmachine: (addons-661496) Calling .GetConfigRaw
	I0510 17:39:31.018694 1172998 main.go:141] libmachine: Creating machine...
	I0510 17:39:31.018711 1172998 main.go:141] libmachine: (addons-661496) Calling .Create
	I0510 17:39:31.018936 1172998 main.go:141] libmachine: (addons-661496) creating KVM machine...
	I0510 17:39:31.018948 1172998 main.go:141] libmachine: (addons-661496) creating network...
	I0510 17:39:31.020305 1172998 main.go:141] libmachine: (addons-661496) DBG | found existing default KVM network
	I0510 17:39:31.020987 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.020850 1173020 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000208dd0}
	I0510 17:39:31.021031 1172998 main.go:141] libmachine: (addons-661496) DBG | created network xml: 
	I0510 17:39:31.021047 1172998 main.go:141] libmachine: (addons-661496) DBG | <network>
	I0510 17:39:31.021056 1172998 main.go:141] libmachine: (addons-661496) DBG |   <name>mk-addons-661496</name>
	I0510 17:39:31.021065 1172998 main.go:141] libmachine: (addons-661496) DBG |   <dns enable='no'/>
	I0510 17:39:31.021072 1172998 main.go:141] libmachine: (addons-661496) DBG |   
	I0510 17:39:31.021081 1172998 main.go:141] libmachine: (addons-661496) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0510 17:39:31.021093 1172998 main.go:141] libmachine: (addons-661496) DBG |     <dhcp>
	I0510 17:39:31.021102 1172998 main.go:141] libmachine: (addons-661496) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0510 17:39:31.021114 1172998 main.go:141] libmachine: (addons-661496) DBG |     </dhcp>
	I0510 17:39:31.021136 1172998 main.go:141] libmachine: (addons-661496) DBG |   </ip>
	I0510 17:39:31.021159 1172998 main.go:141] libmachine: (addons-661496) DBG |   
	I0510 17:39:31.021175 1172998 main.go:141] libmachine: (addons-661496) DBG | </network>
	I0510 17:39:31.021193 1172998 main.go:141] libmachine: (addons-661496) DBG | 
	I0510 17:39:31.026704 1172998 main.go:141] libmachine: (addons-661496) DBG | trying to create private KVM network mk-addons-661496 192.168.39.0/24...
	I0510 17:39:31.093110 1172998 main.go:141] libmachine: (addons-661496) DBG | private KVM network mk-addons-661496 192.168.39.0/24 created
	I0510 17:39:31.093148 1172998 main.go:141] libmachine: (addons-661496) setting up store path in /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496 ...
	I0510 17:39:31.093164 1172998 main.go:141] libmachine: (addons-661496) building disk image from file:///home/jenkins/minikube-integration/20720-1165049/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 17:39:31.093206 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.093077 1173020 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:39:31.093306 1172998 main.go:141] libmachine: (addons-661496) Downloading /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20720-1165049/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso...
	I0510 17:39:31.406155 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.406017 1173020 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa...
	I0510 17:39:31.568126 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.567921 1173020 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/addons-661496.rawdisk...
	I0510 17:39:31.568190 1172998 main.go:141] libmachine: (addons-661496) DBG | Writing magic tar header
	I0510 17:39:31.568205 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496 (perms=drwx------)
	I0510 17:39:31.568221 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration/20720-1165049/.minikube/machines (perms=drwxr-xr-x)
	I0510 17:39:31.568232 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration/20720-1165049/.minikube (perms=drwxr-xr-x)
	I0510 17:39:31.568239 1172998 main.go:141] libmachine: (addons-661496) DBG | Writing SSH key tar header
	I0510 17:39:31.568272 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration/20720-1165049 (perms=drwxrwxr-x)
	I0510 17:39:31.568320 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.568041 1173020 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496 ...
	I0510 17:39:31.568331 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0510 17:39:31.568341 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0510 17:39:31.568346 1172998 main.go:141] libmachine: (addons-661496) creating domain...
	I0510 17:39:31.568356 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496
	I0510 17:39:31.568365 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines
	I0510 17:39:31.568378 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:39:31.568392 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-1165049
	I0510 17:39:31.568403 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0510 17:39:31.568412 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins
	I0510 17:39:31.568421 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home
	I0510 17:39:31.568428 1172998 main.go:141] libmachine: (addons-661496) DBG | skipping /home - not owner
	I0510 17:39:31.569387 1172998 main.go:141] libmachine: (addons-661496) define libvirt domain using xml: 
	I0510 17:39:31.569404 1172998 main.go:141] libmachine: (addons-661496) <domain type='kvm'>
	I0510 17:39:31.569411 1172998 main.go:141] libmachine: (addons-661496)   <name>addons-661496</name>
	I0510 17:39:31.569416 1172998 main.go:141] libmachine: (addons-661496)   <memory unit='MiB'>4000</memory>
	I0510 17:39:31.569421 1172998 main.go:141] libmachine: (addons-661496)   <vcpu>2</vcpu>
	I0510 17:39:31.569425 1172998 main.go:141] libmachine: (addons-661496)   <features>
	I0510 17:39:31.569432 1172998 main.go:141] libmachine: (addons-661496)     <acpi/>
	I0510 17:39:31.569439 1172998 main.go:141] libmachine: (addons-661496)     <apic/>
	I0510 17:39:31.569446 1172998 main.go:141] libmachine: (addons-661496)     <pae/>
	I0510 17:39:31.569473 1172998 main.go:141] libmachine: (addons-661496)     
	I0510 17:39:31.569504 1172998 main.go:141] libmachine: (addons-661496)   </features>
	I0510 17:39:31.569531 1172998 main.go:141] libmachine: (addons-661496)   <cpu mode='host-passthrough'>
	I0510 17:39:31.569560 1172998 main.go:141] libmachine: (addons-661496)   
	I0510 17:39:31.569569 1172998 main.go:141] libmachine: (addons-661496)   </cpu>
	I0510 17:39:31.569574 1172998 main.go:141] libmachine: (addons-661496)   <os>
	I0510 17:39:31.569580 1172998 main.go:141] libmachine: (addons-661496)     <type>hvm</type>
	I0510 17:39:31.569586 1172998 main.go:141] libmachine: (addons-661496)     <boot dev='cdrom'/>
	I0510 17:39:31.569593 1172998 main.go:141] libmachine: (addons-661496)     <boot dev='hd'/>
	I0510 17:39:31.569601 1172998 main.go:141] libmachine: (addons-661496)     <bootmenu enable='no'/>
	I0510 17:39:31.569611 1172998 main.go:141] libmachine: (addons-661496)   </os>
	I0510 17:39:31.569631 1172998 main.go:141] libmachine: (addons-661496)   <devices>
	I0510 17:39:31.569650 1172998 main.go:141] libmachine: (addons-661496)     <disk type='file' device='cdrom'>
	I0510 17:39:31.569662 1172998 main.go:141] libmachine: (addons-661496)       <source file='/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/boot2docker.iso'/>
	I0510 17:39:31.569672 1172998 main.go:141] libmachine: (addons-661496)       <target dev='hdc' bus='scsi'/>
	I0510 17:39:31.569680 1172998 main.go:141] libmachine: (addons-661496)       <readonly/>
	I0510 17:39:31.569687 1172998 main.go:141] libmachine: (addons-661496)     </disk>
	I0510 17:39:31.569695 1172998 main.go:141] libmachine: (addons-661496)     <disk type='file' device='disk'>
	I0510 17:39:31.569706 1172998 main.go:141] libmachine: (addons-661496)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0510 17:39:31.569729 1172998 main.go:141] libmachine: (addons-661496)       <source file='/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/addons-661496.rawdisk'/>
	I0510 17:39:31.569750 1172998 main.go:141] libmachine: (addons-661496)       <target dev='hda' bus='virtio'/>
	I0510 17:39:31.569762 1172998 main.go:141] libmachine: (addons-661496)     </disk>
	I0510 17:39:31.569773 1172998 main.go:141] libmachine: (addons-661496)     <interface type='network'>
	I0510 17:39:31.569786 1172998 main.go:141] libmachine: (addons-661496)       <source network='mk-addons-661496'/>
	I0510 17:39:31.569795 1172998 main.go:141] libmachine: (addons-661496)       <model type='virtio'/>
	I0510 17:39:31.569807 1172998 main.go:141] libmachine: (addons-661496)     </interface>
	I0510 17:39:31.569815 1172998 main.go:141] libmachine: (addons-661496)     <interface type='network'>
	I0510 17:39:31.569825 1172998 main.go:141] libmachine: (addons-661496)       <source network='default'/>
	I0510 17:39:31.569836 1172998 main.go:141] libmachine: (addons-661496)       <model type='virtio'/>
	I0510 17:39:31.569871 1172998 main.go:141] libmachine: (addons-661496)     </interface>
	I0510 17:39:31.569898 1172998 main.go:141] libmachine: (addons-661496)     <serial type='pty'>
	I0510 17:39:31.569908 1172998 main.go:141] libmachine: (addons-661496)       <target port='0'/>
	I0510 17:39:31.569915 1172998 main.go:141] libmachine: (addons-661496)     </serial>
	I0510 17:39:31.569923 1172998 main.go:141] libmachine: (addons-661496)     <console type='pty'>
	I0510 17:39:31.569932 1172998 main.go:141] libmachine: (addons-661496)       <target type='serial' port='0'/>
	I0510 17:39:31.569940 1172998 main.go:141] libmachine: (addons-661496)     </console>
	I0510 17:39:31.569961 1172998 main.go:141] libmachine: (addons-661496)     <rng model='virtio'>
	I0510 17:39:31.569976 1172998 main.go:141] libmachine: (addons-661496)       <backend model='random'>/dev/random</backend>
	I0510 17:39:31.569989 1172998 main.go:141] libmachine: (addons-661496)     </rng>
	I0510 17:39:31.570001 1172998 main.go:141] libmachine: (addons-661496)     
	I0510 17:39:31.570010 1172998 main.go:141] libmachine: (addons-661496)     
	I0510 17:39:31.570018 1172998 main.go:141] libmachine: (addons-661496)   </devices>
	I0510 17:39:31.570027 1172998 main.go:141] libmachine: (addons-661496) </domain>
	I0510 17:39:31.570039 1172998 main.go:141] libmachine: (addons-661496) 
	I0510 17:39:31.575914 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:4e:79:e3 in network default
	I0510 17:39:31.576613 1172998 main.go:141] libmachine: (addons-661496) starting domain...
	I0510 17:39:31.576637 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:31.576642 1172998 main.go:141] libmachine: (addons-661496) ensuring networks are active...
	I0510 17:39:31.577385 1172998 main.go:141] libmachine: (addons-661496) Ensuring network default is active
	I0510 17:39:31.577737 1172998 main.go:141] libmachine: (addons-661496) Ensuring network mk-addons-661496 is active
	I0510 17:39:31.578199 1172998 main.go:141] libmachine: (addons-661496) getting domain XML...
	I0510 17:39:31.578836 1172998 main.go:141] libmachine: (addons-661496) creating domain...
	I0510 17:39:32.982410 1172998 main.go:141] libmachine: (addons-661496) waiting for IP...
	I0510 17:39:32.983172 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:32.983564 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:32.983619 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:32.983572 1173020 retry.go:31] will retry after 216.769661ms: waiting for domain to come up
	I0510 17:39:33.202181 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:33.202673 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:33.202732 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:33.202651 1173020 retry.go:31] will retry after 340.808751ms: waiting for domain to come up
	I0510 17:39:33.545470 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:33.545971 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:33.546011 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:33.545960 1173020 retry.go:31] will retry after 483.379709ms: waiting for domain to come up
	I0510 17:39:34.030801 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:34.031259 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:34.031287 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:34.031231 1173020 retry.go:31] will retry after 552.15185ms: waiting for domain to come up
	I0510 17:39:34.585072 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:34.585659 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:34.585693 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:34.585606 1173020 retry.go:31] will retry after 664.178924ms: waiting for domain to come up
	I0510 17:39:35.251679 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:35.252266 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:35.252296 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:35.252245 1173020 retry.go:31] will retry after 776.32739ms: waiting for domain to come up
	I0510 17:39:36.029991 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:36.030564 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:36.030590 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:36.030494 1173020 retry.go:31] will retry after 1.081819112s: waiting for domain to come up
	I0510 17:39:37.113967 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:37.114443 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:37.114506 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:37.114406 1173020 retry.go:31] will retry after 1.462566483s: waiting for domain to come up
	I0510 17:39:38.579064 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:38.579515 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:38.579595 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:38.579490 1173020 retry.go:31] will retry after 1.342534125s: waiting for domain to come up
	I0510 17:39:39.924363 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:39.924862 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:39.924893 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:39.924817 1173020 retry.go:31] will retry after 1.720624711s: waiting for domain to come up
	I0510 17:39:41.647711 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:41.648298 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:41.648381 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:41.648284 1173020 retry.go:31] will retry after 2.214923221s: waiting for domain to come up
	I0510 17:39:43.865667 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:43.866173 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:43.866202 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:43.866128 1173020 retry.go:31] will retry after 2.343225628s: waiting for domain to come up
	I0510 17:39:46.211369 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:46.211840 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:46.211874 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:46.211778 1173020 retry.go:31] will retry after 3.192384897s: waiting for domain to come up
	I0510 17:39:49.408277 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:49.408735 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:49.408762 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:49.408702 1173020 retry.go:31] will retry after 4.135723361s: waiting for domain to come up
	I0510 17:39:53.547776 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.548260 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has current primary IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.548282 1172998 main.go:141] libmachine: (addons-661496) found domain IP: 192.168.39.168
	I0510 17:39:53.548296 1172998 main.go:141] libmachine: (addons-661496) reserving static IP address...
	I0510 17:39:53.548665 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find host DHCP lease matching {name: "addons-661496", mac: "52:54:00:9e:78:fe", ip: "192.168.39.168"} in network mk-addons-661496
	I0510 17:39:53.621940 1172998 main.go:141] libmachine: (addons-661496) DBG | Getting to WaitForSSH function...
	I0510 17:39:53.621976 1172998 main.go:141] libmachine: (addons-661496) reserved static IP address 192.168.39.168 for domain addons-661496
	I0510 17:39:53.621989 1172998 main.go:141] libmachine: (addons-661496) waiting for SSH...
	I0510 17:39:53.624195 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.624576 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:53.624602 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.624739 1172998 main.go:141] libmachine: (addons-661496) DBG | Using SSH client type: external
	I0510 17:39:53.624785 1172998 main.go:141] libmachine: (addons-661496) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa (-rw-------)
	I0510 17:39:53.624824 1172998 main.go:141] libmachine: (addons-661496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 17:39:53.624847 1172998 main.go:141] libmachine: (addons-661496) DBG | About to run SSH command:
	I0510 17:39:53.624878 1172998 main.go:141] libmachine: (addons-661496) DBG | exit 0
	I0510 17:39:53.752178 1172998 main.go:141] libmachine: (addons-661496) DBG | SSH cmd err, output: <nil>: 
	I0510 17:39:53.752500 1172998 main.go:141] libmachine: (addons-661496) KVM machine creation complete
	I0510 17:39:53.752843 1172998 main.go:141] libmachine: (addons-661496) Calling .GetConfigRaw
	I0510 17:39:53.753423 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:53.753641 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:53.753768 1172998 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0510 17:39:53.753781 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:39:53.754940 1172998 main.go:141] libmachine: Detecting operating system of created instance...
	I0510 17:39:53.754954 1172998 main.go:141] libmachine: Waiting for SSH to be available...
	I0510 17:39:53.754960 1172998 main.go:141] libmachine: Getting to WaitForSSH function...
	I0510 17:39:53.754985 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:53.757207 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.757549 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:53.757576 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.757656 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:53.757806 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.757949 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.758079 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:53.758233 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:53.758480 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:53.758493 1172998 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0510 17:39:53.867779 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:39:53.867810 1172998 main.go:141] libmachine: Detecting the provisioner...
	I0510 17:39:53.867822 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:53.870400 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.870814 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:53.870847 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.870977 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:53.871158 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.871337 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.871480 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:53.871639 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:53.871843 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:53.871855 1172998 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0510 17:39:53.981092 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2024.11.2-dirty
	ID=buildroot
	VERSION_ID=2024.11.2
	PRETTY_NAME="Buildroot 2024.11.2"
	
	I0510 17:39:53.981212 1172998 main.go:141] libmachine: found compatible host: buildroot
	I0510 17:39:53.981227 1172998 main.go:141] libmachine: Provisioning with buildroot...
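
	Editor's note: the provisioner is selected by parsing the `cat /etc/os-release` output shown above into key/value pairs and matching the ID field. A small self-contained sketch (our helper name, not minikube's):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseOSRelease turns os-release output into a map; quotes are stripped
	// so PRETTY_NAME="Buildroot 2024.11.2" parses cleanly.
	func parseOSRelease(out string) map[string]string {
		info := map[string]string{}
		for _, line := range strings.Split(out, "\n") {
			if k, v, ok := strings.Cut(line, "="); ok {
				info[k] = strings.Trim(v, `"`)
			}
		}
		return info
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2024.11.2-dirty\nID=buildroot\nVERSION_ID=2024.11.2\nPRETTY_NAME=\"Buildroot 2024.11.2\"\n"
		info := parseOSRelease(out)
		fmt.Println(info["ID"]) // "buildroot" -> the buildroot provisioner is selected
	}
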
	I0510 17:39:53.981236 1172998 main.go:141] libmachine: (addons-661496) Calling .GetMachineName
	I0510 17:39:53.981553 1172998 buildroot.go:166] provisioning hostname "addons-661496"
	I0510 17:39:53.981595 1172998 main.go:141] libmachine: (addons-661496) Calling .GetMachineName
	I0510 17:39:53.981769 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:53.984647 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.984964 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:53.984993 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.985238 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:53.985431 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.985567 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.985685 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:53.985817 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:53.986022 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:53.986034 1172998 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-661496 && echo "addons-661496" | sudo tee /etc/hostname
	I0510 17:39:54.113974 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-661496
	
	I0510 17:39:54.114006 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.116594 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.116890 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.116935 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.117091 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.117307 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.117496 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.117623 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.117769 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:54.118026 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:54.118043 1172998 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-661496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-661496/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-661496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 17:39:54.234057 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:39:54.234109 1172998 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-1165049/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-1165049/.minikube}
	I0510 17:39:54.234140 1172998 buildroot.go:174] setting up certificates
	I0510 17:39:54.234154 1172998 provision.go:84] configureAuth start
	I0510 17:39:54.234169 1172998 main.go:141] libmachine: (addons-661496) Calling .GetMachineName
	I0510 17:39:54.234485 1172998 main.go:141] libmachine: (addons-661496) Calling .GetIP
	I0510 17:39:54.237262 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.237595 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.237620 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.237780 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.240029 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.240418 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.240445 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.240653 1172998 provision.go:143] copyHostCerts
	I0510 17:39:54.240737 1172998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.pem (1078 bytes)
	I0510 17:39:54.240908 1172998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-1165049/.minikube/cert.pem (1123 bytes)
	I0510 17:39:54.240998 1172998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-1165049/.minikube/key.pem (1679 bytes)
	I0510 17:39:54.241057 1172998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca-key.pem org=jenkins.addons-661496 san=[127.0.0.1 192.168.39.168 addons-661496 localhost minikube]
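
	Editor's note: the server cert above is signed by the minikube CA and carries the logged SAN list (127.0.0.1, 192.168.39.168, addons-661496, localhost, minikube). A compact sketch of the same issuance with crypto/x509, using a throwaway self-signed CA in place of .minikube/certs/ca.pem; error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Stand-in CA (errors ignored only because this is a sketch).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SAN list from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-661496"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.168")},
			DNSNames:     []string{"addons-661496", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
		fmt.Println(len(der), err)
	}
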
	I0510 17:39:54.335054 1172998 provision.go:177] copyRemoteCerts
	I0510 17:39:54.335129 1172998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 17:39:54.335159 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.337915 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.338284 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.338317 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.338472 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.338699 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.338886 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.339024 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:39:54.423690 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0510 17:39:54.449850 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 17:39:54.475540 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 17:39:54.500529 1172998 provision.go:87] duration metric: took 266.357083ms to configureAuth
	I0510 17:39:54.500559 1172998 buildroot.go:189] setting minikube options for container-runtime
	I0510 17:39:54.500728 1172998 config.go:182] Loaded profile config "addons-661496": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:39:54.500751 1172998 main.go:141] libmachine: Checking connection to Docker...
	I0510 17:39:54.500760 1172998 main.go:141] libmachine: (addons-661496) Calling .GetURL
	I0510 17:39:54.502007 1172998 main.go:141] libmachine: (addons-661496) DBG | using libvirt version 6000000
	I0510 17:39:54.504136 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.504491 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.504519 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.504722 1172998 main.go:141] libmachine: Docker is up and running!
	I0510 17:39:54.504743 1172998 main.go:141] libmachine: Reticulating splines...
	I0510 17:39:54.504755 1172998 client.go:171] duration metric: took 23.696803953s to LocalClient.Create
	I0510 17:39:54.504787 1172998 start.go:167] duration metric: took 23.696884418s to libmachine.API.Create "addons-661496"
	I0510 17:39:54.504800 1172998 start.go:293] postStartSetup for "addons-661496" (driver="kvm2")
	I0510 17:39:54.504817 1172998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 17:39:54.504839 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.505171 1172998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 17:39:54.505203 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.508540 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.508964 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.508992 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.509202 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.509386 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.509543 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.509705 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:39:54.596131 1172998 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 17:39:54.600398 1172998 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 17:39:54.600439 1172998 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-1165049/.minikube/addons for local assets ...
	I0510 17:39:54.600508 1172998 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-1165049/.minikube/files for local assets ...
	I0510 17:39:54.600534 1172998 start.go:296] duration metric: took 95.72299ms for postStartSetup
	I0510 17:39:54.600581 1172998 main.go:141] libmachine: (addons-661496) Calling .GetConfigRaw
	I0510 17:39:54.601211 1172998 main.go:141] libmachine: (addons-661496) Calling .GetIP
	I0510 17:39:54.604092 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.604679 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.604705 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.604960 1172998 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/config.json ...
	I0510 17:39:54.605150 1172998 start.go:128] duration metric: took 23.81587997s to createHost
	I0510 17:39:54.605191 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.607726 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.608040 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.608088 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.608244 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.608452 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.608609 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.608767 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.608912 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:54.609145 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:54.609158 1172998 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 17:39:54.717125 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746898794.690126450
	
	I0510 17:39:54.717156 1172998 fix.go:216] guest clock: 1746898794.690126450
	I0510 17:39:54.717164 1172998 fix.go:229] Guest: 2025-05-10 17:39:54.69012645 +0000 UTC Remote: 2025-05-10 17:39:54.605165793 +0000 UTC m=+23.921804666 (delta=84.960657ms)
	I0510 17:39:54.717186 1172998 fix.go:200] guest clock delta is within tolerance: 84.960657ms
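
	Editor's note: the fix.go lines above run `date +%s.%N` in the guest, parse the fractional epoch, and compare it to the host clock. A sketch of that delta check; the 1s tolerance constant is a hypothetical stand-in for whatever fix.go actually uses:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	const tolerance = time.Second // hypothetical threshold, not minikube's value

	// withinTolerance parses `date +%s.%N` output and compares it to the
	// host-observed remote timestamp.
	func withinTolerance(guestOut string, host time.Time) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance, nil
	}

	func main() {
		// The values logged above: guest 1746898794.690126450 vs remote
		// 2025-05-10 17:39:54.605165793 UTC.
		d, ok, _ := withinTolerance("1746898794.690126450", time.Unix(0, 1746898794605165793))
		fmt.Println(d, ok) // ~84.96ms (modulo float64 rounding), true
	}
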
	I0510 17:39:54.717192 1172998 start.go:83] releasing machines lock for "addons-661496", held for 23.928011693s
	I0510 17:39:54.717215 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.717532 1172998 main.go:141] libmachine: (addons-661496) Calling .GetIP
	I0510 17:39:54.720203 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.720567 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.720587 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.720745 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.721284 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.721462 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.721577 1172998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 17:39:54.721624 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.721743 1172998 ssh_runner.go:195] Run: cat /version.json
	I0510 17:39:54.721769 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.724329 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.724395 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.724682 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.724711 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.724744 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.724761 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.724860 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.724972 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.725067 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.725125 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.725213 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.725287 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.725377 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:39:54.725445 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:39:54.836072 1172998 ssh_runner.go:195] Run: systemctl --version
	I0510 17:39:54.841649 1172998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 17:39:54.846847 1172998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 17:39:54.846928 1172998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:39:54.864868 1172998 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 17:39:54.864900 1172998 start.go:495] detecting cgroup driver to use...
	I0510 17:39:54.864981 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0510 17:39:54.896622 1172998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0510 17:39:54.910331 1172998 docker.go:225] disabling cri-docker service (if available) ...
	I0510 17:39:54.910424 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 17:39:54.924872 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 17:39:54.939184 1172998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 17:39:55.070575 1172998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 17:39:55.203182 1172998 docker.go:241] disabling docker service ...
	I0510 17:39:55.203295 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 17:39:55.218970 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 17:39:55.233309 1172998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 17:39:55.415615 1172998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 17:39:55.547476 1172998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 17:39:55.561319 1172998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 17:39:55.581380 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0510 17:39:55.591849 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0510 17:39:55.602830 1172998 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0510 17:39:55.602900 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0510 17:39:55.613712 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0510 17:39:55.624676 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0510 17:39:55.636130 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0510 17:39:55.647294 1172998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 17:39:55.658559 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0510 17:39:55.669462 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0510 17:39:55.680091 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
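
	Editor's note: the sed series above rewrites containerd's config.toml in place — sandbox image, runc v2 runtime, CNI conf_dir, and crucially `SystemdCgroup = false` to select the cgroupfs driver. The same indentation-preserving rewrite expressed with Go's regexp package, as a sketch:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(conf, `${1}SystemdCgroup = false`))
	}
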
	I0510 17:39:55.690979 1172998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 17:39:55.699923 1172998 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 17:39:55.699992 1172998 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 17:39:55.712849 1172998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 17:39:55.722530 1172998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:39:55.853198 1172998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0510 17:39:55.885576 1172998 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0510 17:39:55.885665 1172998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0510 17:39:55.889889 1172998 retry.go:31] will retry after 1.227640556s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
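
	Editor's note: after restarting containerd the socket takes a moment to appear, hence the retry above. A minimal sketch of that wait loop — poll for the socket path with doubling backoff until a deadline (the real retry.go uses its own jittered schedule):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for backoff := 250 * time.Millisecond; time.Now().Before(deadline); backoff *= 2 {
			if _, err := os.Stat(path); err == nil {
				return nil // socket exists; containerd is accepting connections soon after
			}
			time.Sleep(backoff)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
	}
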
	I0510 17:39:57.118342 1172998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0510 17:39:57.123859 1172998 start.go:563] Will wait 60s for crictl version
	I0510 17:39:57.123940 1172998 ssh_runner.go:195] Run: which crictl
	I0510 17:39:57.127736 1172998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 17:39:57.170227 1172998 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0510 17:39:57.170314 1172998 ssh_runner.go:195] Run: containerd --version
	I0510 17:39:57.193745 1172998 ssh_runner.go:195] Run: containerd --version
	I0510 17:39:57.216797 1172998 out.go:177] * Preparing Kubernetes v1.33.0 on containerd 1.7.23 ...
	I0510 17:39:57.218232 1172998 main.go:141] libmachine: (addons-661496) Calling .GetIP
	I0510 17:39:57.221128 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:57.221479 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:57.221509 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:57.221671 1172998 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0510 17:39:57.225714 1172998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:39:57.238746 1172998 kubeadm.go:875] updating cluster {Name:addons-661496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 17:39:57.238855 1172998 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime containerd
	I0510 17:39:57.238909 1172998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:39:57.269645 1172998 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.0". assuming images are not preloaded.
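
	Editor's note: the preload check at containerd.go:623 decodes `sudo crictl images --output json` and looks for the expected kube-apiserver tag; on this first pass the store is empty, so the preload tarball is copied over. A sketch of that check (struct shape inferred from crictl's JSON output):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether the crictl JSON dump contains the given tag.
	func hasImage(out []byte, want string) bool {
		var imgs criImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					return true
				}
			}
		}
		return false
	}

	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.33.0"]}]}`)
		fmt.Println(hasImage(sample, "registry.k8s.io/kube-apiserver:v1.33.0")) // true once preloaded
	}
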
	I0510 17:39:57.269754 1172998 ssh_runner.go:195] Run: which lz4
	I0510 17:39:57.273673 1172998 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 17:39:57.277866 1172998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 17:39:57.277897 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412760592 bytes)
	I0510 17:39:58.495413 1172998 containerd.go:563] duration metric: took 1.221786307s to copy over tarball
	I0510 17:39:58.495486 1172998 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 17:40:00.411079 1172998 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.915560713s)
	I0510 17:40:00.411123 1172998 containerd.go:570] duration metric: took 1.915678216s to extract the tarball
	I0510 17:40:00.411135 1172998 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0510 17:40:00.449462 1172998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:40:00.591311 1172998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0510 17:40:00.626460 1172998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:40:00.676302 1172998 retry.go:31] will retry after 146.30517ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-05-10T17:40:00Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0510 17:40:00.823262 1172998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:40:00.860235 1172998 containerd.go:627] all images are preloaded for containerd runtime.
	I0510 17:40:00.860266 1172998 cache_images.go:84] Images are preloaded, skipping loading
	I0510 17:40:00.860281 1172998 kubeadm.go:926] updating node { 192.168.39.168 8443 v1.33.0 containerd true true} ...
	I0510 17:40:00.860447 1172998 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-661496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
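
	Editor's note: the kubelet drop-in above is generated from the node config (runtime, version, hostname override, node IP). A sketch of rendering it with text/template; the template fields are our naming, not minikube's:

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		t.Execute(os.Stdout, map[string]string{
			"Runtime": "containerd", "Version": "v1.33.0",
			"Node": "addons-661496", "IP": "192.168.39.168",
		})
	}
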
	I0510 17:40:00.860520 1172998 ssh_runner.go:195] Run: sudo crictl info
	I0510 17:40:00.894826 1172998 cni.go:84] Creating CNI manager for ""
	I0510 17:40:00.894854 1172998 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:40:00.894865 1172998 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 17:40:00.894887 1172998 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-661496 NodeName:addons-661496 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 17:40:00.895003 1172998 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-661496"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.168"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 17:40:00.895087 1172998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 17:40:00.908329 1172998 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 17:40:00.908412 1172998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 17:40:00.919246 1172998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0510 17:40:00.937996 1172998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 17:40:00.956415 1172998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2309 bytes)
	I0510 17:40:00.974702 1172998 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0510 17:40:00.978400 1172998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:40:00.991443 1172998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:40:01.127827 1172998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:40:01.157295 1172998 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496 for IP: 192.168.39.168
	I0510 17:40:01.157341 1172998 certs.go:194] generating shared ca certs ...
	I0510 17:40:01.157367 1172998 certs.go:226] acquiring lock for ca certs: {Name:mk7942eb7613cd1b5cd28fde706e9943dadc4445 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:01.157557 1172998 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.key
	I0510 17:40:02.028851 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.crt ...
	I0510 17:40:02.028885 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.crt: {Name:mk5ebf958cd39484a03f4716b32fa9f4828e8749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.029112 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.key ...
	I0510 17:40:02.029227 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.key: {Name:mk4211e8556b6df47299b54db279621eed96de58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.029425 1172998 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.key
	I0510 17:40:02.127880 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.crt ...
	I0510 17:40:02.127916 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.crt: {Name:mk28239bcb974f081392efd547f702f946f7c7c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.128129 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.key ...
	I0510 17:40:02.128145 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.key: {Name:mk2dbb6673c0b09dac77a81715c8449b9119dd34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.128259 1172998 certs.go:256] generating profile certs ...
	I0510 17:40:02.128328 1172998 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.key
	I0510 17:40:02.128345 1172998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt with IP's: []
	I0510 17:40:02.770121 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt ...
	I0510 17:40:02.770166 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: {Name:mk5605ed7493b5cf3448d4e4ad6ad143470a92d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.770372 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.key ...
	I0510 17:40:02.770386 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.key: {Name:mka2b295ec69120c17a47a8dc487e313fb162658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.770470 1172998 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key.10321f4d
	I0510 17:40:02.770492 1172998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt.10321f4d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168]
	I0510 17:40:03.260761 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt.10321f4d ...
	I0510 17:40:03.260798 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt.10321f4d: {Name:mk8f5cb5f23362e694715c1d70642a0a777ecafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:03.260966 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key.10321f4d ...
	I0510 17:40:03.260979 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key.10321f4d: {Name:mkeadb9846f0e1676f0f96179d337fe535471558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:03.261053 1172998 certs.go:381] copying /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt.10321f4d -> /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt
	I0510 17:40:03.261124 1172998 certs.go:385] copying /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key.10321f4d -> /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key
	I0510 17:40:03.261177 1172998 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.key
	I0510 17:40:03.261195 1172998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.crt with IP's: []
	I0510 17:40:03.886950 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.crt ...
	I0510 17:40:03.886986 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.crt: {Name:mkfec4fdcc46584efd5d0043ad841b8e7cc4bc42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:03.887173 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.key ...
	I0510 17:40:03.887186 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.key: {Name:mke3c5e4245a18f9aaa36ef8c4cdebf12a7b1abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:03.887369 1172998 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 17:40:03.887411 1172998 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem (1078 bytes)
	I0510 17:40:03.887433 1172998 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/cert.pem (1123 bytes)
	I0510 17:40:03.887454 1172998 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/key.pem (1679 bytes)
	I0510 17:40:03.888191 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 17:40:03.916833 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 17:40:03.943310 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 17:40:03.969821 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 17:40:03.997414 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0510 17:40:04.025343 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0510 17:40:04.052855 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 17:40:04.080081 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 17:40:04.107449 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 17:40:04.135126 1172998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 17:40:04.153955 1172998 ssh_runner.go:195] Run: openssl version
	I0510 17:40:04.159879 1172998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 17:40:04.171789 1172998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:40:04.176464 1172998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:40:04.176534 1172998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:40:04.183117 1172998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 17:40:04.195686 1172998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 17:40:04.200042 1172998 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0510 17:40:04.200101 1172998 kubeadm.go:392] StartCluster: {Name:addons-661496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:40:04.200222 1172998 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0510 17:40:04.200319 1172998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 17:40:04.235324 1172998 cri.go:89] found id: ""
	I0510 17:40:04.235423 1172998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 17:40:04.247260 1172998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 17:40:04.258635 1172998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 17:40:04.270480 1172998 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 17:40:04.270508 1172998 kubeadm.go:157] found existing configuration files:
	
	I0510 17:40:04.270572 1172998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 17:40:04.281640 1172998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 17:40:04.281714 1172998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 17:40:04.292607 1172998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 17:40:04.303437 1172998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 17:40:04.303539 1172998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 17:40:04.314398 1172998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 17:40:04.324904 1172998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 17:40:04.324986 1172998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 17:40:04.335708 1172998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 17:40:04.345884 1172998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 17:40:04.345963 1172998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
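
	Editor's note: the grep/rm pairs above are a stale-config sweep — each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise removed so kubeadm regenerates it. A local sketch of the same logic, assuming direct file access rather than ssh_runner:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Mirrors the `sudo rm -f` steps logged above; on a fresh
				// node the files simply don't exist yet.
				fmt.Println("removing stale", f)
				_ = os.Remove(f)
			}
		}
	}
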
	I0510 17:40:04.356965 1172998 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 17:40:04.511871 1172998 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 17:40:16.470240 1172998 kubeadm.go:310] [init] Using Kubernetes version: v1.33.0
	I0510 17:40:16.470328 1172998 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 17:40:16.470431 1172998 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 17:40:16.470586 1172998 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 17:40:16.470731 1172998 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0510 17:40:16.470814 1172998 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 17:40:16.472503 1172998 out.go:235]   - Generating certificates and keys ...
	I0510 17:40:16.472603 1172998 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 17:40:16.472676 1172998 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 17:40:16.472809 1172998 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0510 17:40:16.472911 1172998 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0510 17:40:16.472991 1172998 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0510 17:40:16.473071 1172998 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0510 17:40:16.473157 1172998 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0510 17:40:16.473350 1172998 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-661496 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0510 17:40:16.473437 1172998 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0510 17:40:16.473641 1172998 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-661496 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0510 17:40:16.473764 1172998 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0510 17:40:16.473840 1172998 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0510 17:40:16.473883 1172998 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0510 17:40:16.473933 1172998 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 17:40:16.473975 1172998 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 17:40:16.474027 1172998 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0510 17:40:16.474080 1172998 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 17:40:16.474166 1172998 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 17:40:16.474255 1172998 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 17:40:16.474384 1172998 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 17:40:16.474456 1172998 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 17:40:16.475885 1172998 out.go:235]   - Booting up control plane ...
	I0510 17:40:16.475990 1172998 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 17:40:16.476075 1172998 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 17:40:16.476193 1172998 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 17:40:16.476297 1172998 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 17:40:16.476390 1172998 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 17:40:16.476429 1172998 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 17:40:16.476581 1172998 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0510 17:40:16.476672 1172998 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0510 17:40:16.476760 1172998 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001254777s
	I0510 17:40:16.476843 1172998 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0510 17:40:16.476909 1172998 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.168:8443/livez
	I0510 17:40:16.476998 1172998 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0510 17:40:16.477065 1172998 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0510 17:40:16.477139 1172998 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.232300516s
	I0510 17:40:16.477235 1172998 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.909365289s
	I0510 17:40:16.477341 1172998 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.002606208s
	I0510 17:40:16.477518 1172998 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0510 17:40:16.477719 1172998 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0510 17:40:16.477806 1172998 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0510 17:40:16.478078 1172998 kubeadm.go:310] [mark-control-plane] Marking the node addons-661496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0510 17:40:16.478129 1172998 kubeadm.go:310] [bootstrap-token] Using token: kf8nq8.faatt9qa2ldbhogm
	I0510 17:40:16.479704 1172998 out.go:235]   - Configuring RBAC rules ...
	I0510 17:40:16.479800 1172998 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0510 17:40:16.479877 1172998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0510 17:40:16.480043 1172998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0510 17:40:16.480185 1172998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0510 17:40:16.480337 1172998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0510 17:40:16.480430 1172998 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0510 17:40:16.480535 1172998 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0510 17:40:16.480574 1172998 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0510 17:40:16.480612 1172998 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0510 17:40:16.480618 1172998 kubeadm.go:310] 
	I0510 17:40:16.480673 1172998 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0510 17:40:16.480680 1172998 kubeadm.go:310] 
	I0510 17:40:16.480749 1172998 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0510 17:40:16.480755 1172998 kubeadm.go:310] 
	I0510 17:40:16.480777 1172998 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0510 17:40:16.480839 1172998 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0510 17:40:16.480885 1172998 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0510 17:40:16.480891 1172998 kubeadm.go:310] 
	I0510 17:40:16.480936 1172998 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0510 17:40:16.480945 1172998 kubeadm.go:310] 
	I0510 17:40:16.480992 1172998 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0510 17:40:16.480998 1172998 kubeadm.go:310] 
	I0510 17:40:16.481041 1172998 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0510 17:40:16.481104 1172998 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0510 17:40:16.481184 1172998 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0510 17:40:16.481194 1172998 kubeadm.go:310] 
	I0510 17:40:16.481269 1172998 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0510 17:40:16.481339 1172998 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0510 17:40:16.481346 1172998 kubeadm.go:310] 
	I0510 17:40:16.481432 1172998 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kf8nq8.faatt9qa2ldbhogm \
	I0510 17:40:16.481525 1172998 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ffe61921efc4d62c2b0265e9e4d4ecc78e39339829cff2fd65f8ba0081188365 \
	I0510 17:40:16.481548 1172998 kubeadm.go:310] 	--control-plane 
	I0510 17:40:16.481553 1172998 kubeadm.go:310] 
	I0510 17:40:16.481627 1172998 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0510 17:40:16.481634 1172998 kubeadm.go:310] 
	I0510 17:40:16.481702 1172998 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kf8nq8.faatt9qa2ldbhogm \
	I0510 17:40:16.481814 1172998 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ffe61921efc4d62c2b0265e9e4d4ecc78e39339829cff2fd65f8ba0081188365 
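The --discovery-token-ca-cert-hash printed in both join commands pins the cluster CA. Should it ever need to be recomputed on this control plane, the standard openssl pipeline applies; note that minikube keeps its CA under /var/lib/minikube/certs (see the certificateDir line earlier in this log) rather than kubeadm's default /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'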
	I0510 17:40:16.481828 1172998 cni.go:84] Creating CNI manager for ""
	I0510 17:40:16.481835 1172998 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:40:16.483427 1172998 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0510 17:40:16.484586 1172998 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0510 17:40:16.497436 1172998 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
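The 496-byte conflist pushed above is minikube's bridge CNI configuration. Its exact contents are not shown in this log; as a rough sketch (assumed, not copied from the run), a bridge + host-local conflist of this kind looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

The 10.244.0.0/16 pod subnet is the conventional default for this kind of setup and is an assumption here, not a value taken from the log.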
	I0510 17:40:16.523293 1172998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0510 17:40:16.523387 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:16.523448 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-661496 minikube.k8s.io/updated_at=2025_05_10T17_40_16_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4 minikube.k8s.io/name=addons-661496 minikube.k8s.io/primary=true
	I0510 17:40:16.565710 1172998 ops.go:34] apiserver oom_adj: -16
	I0510 17:40:16.680542 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:17.180839 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:17.681443 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:18.180793 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:18.680685 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:19.181602 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:19.681072 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:20.180878 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
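The repeated "kubectl get sa default" runs above are a readiness poll: minikube waits for kube-controller-manager to create the "default" ServiceAccount before its elevated RBAC for kube-system (the minikube-rbac clusterrolebinding created earlier) is considered settled. The equivalent manual check would be:

    kubectl --context addons-661496 -n default get serviceaccount default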
	I0510 17:40:20.277992 1172998 kubeadm.go:1105] duration metric: took 3.754682071s to wait for elevateKubeSystemPrivileges
	I0510 17:40:20.278037 1172998 kubeadm.go:394] duration metric: took 16.077940348s to StartCluster
	I0510 17:40:20.278063 1172998 settings.go:142] acquiring lock: {Name:mk469c480b22625281eadd5ebdc6a04348599b1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:20.278227 1172998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:40:20.278842 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/kubeconfig: {Name:mk677f0619615b74c93431771f158c6db83d5db8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:20.279095 1172998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0510 17:40:20.279139 1172998 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0510 17:40:20.279302 1172998 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
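The toEnable map above is the full addon matrix minikube evaluates for this profile; entries set to true (ingress, registry, metrics-server, csi-hostpath-driver, volcano, and so on) are installed in the steps that follow. The same state can be inspected or changed per profile from the CLI:

    minikube addons list -p addons-661496
    minikube addons enable ingress -p addons-661496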
	I0510 17:40:20.279392 1172998 config.go:182] Loaded profile config "addons-661496": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:40:20.279446 1172998 addons.go:69] Setting ingress-dns=true in profile "addons-661496"
	I0510 17:40:20.279455 1172998 addons.go:69] Setting inspektor-gadget=true in profile "addons-661496"
	I0510 17:40:20.279471 1172998 addons.go:238] Setting addon inspektor-gadget=true in "addons-661496"
	I0510 17:40:20.279482 1172998 addons.go:69] Setting default-storageclass=true in profile "addons-661496"
	I0510 17:40:20.279501 1172998 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-661496"
	I0510 17:40:20.279561 1172998 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-661496"
	I0510 17:40:20.279563 1172998 addons.go:69] Setting storage-provisioner=true in profile "addons-661496"
	I0510 17:40:20.279582 1172998 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-661496"
	I0510 17:40:20.279594 1172998 addons.go:238] Setting addon storage-provisioner=true in "addons-661496"
	I0510 17:40:20.279602 1172998 addons.go:69] Setting cloud-spanner=true in profile "addons-661496"
	I0510 17:40:20.279629 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279628 1172998 addons.go:69] Setting volcano=true in profile "addons-661496"
	I0510 17:40:20.279647 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279652 1172998 addons.go:238] Setting addon cloud-spanner=true in "addons-661496"
	I0510 17:40:20.279663 1172998 addons.go:238] Setting addon volcano=true in "addons-661496"
	I0510 17:40:20.279636 1172998 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-661496"
	I0510 17:40:20.279681 1172998 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-661496"
	I0510 17:40:20.279689 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279693 1172998 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-661496"
	I0510 17:40:20.279707 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279724 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279736 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279472 1172998 addons.go:238] Setting addon ingress-dns=true in "addons-661496"
	I0510 17:40:20.279781 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279518 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279445 1172998 addons.go:69] Setting yakd=true in profile "addons-661496"
	I0510 17:40:20.280186 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280195 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280205 1172998 addons.go:238] Setting addon yakd=true in "addons-661496"
	I0510 17:40:20.280213 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.279522 1172998 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-661496"
	I0510 17:40:20.280224 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.279537 1172998 addons.go:69] Setting volumesnapshots=true in profile "addons-661496"
	I0510 17:40:20.280241 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280246 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280252 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280260 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.279548 1172998 addons.go:69] Setting metrics-server=true in profile "addons-661496"
	I0510 17:40:20.280268 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280277 1172998 addons.go:238] Setting addon metrics-server=true in "addons-661496"
	I0510 17:40:20.280291 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.280467 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280498 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280572 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280603 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.281085 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.281161 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.281367 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.281417 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.281508 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.281550 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280230 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.279528 1172998 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-661496"
	I0510 17:40:20.282484 1172998 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-661496"
	I0510 17:40:20.282963 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.283114 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280232 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.285028 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.286020 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.297056 1172998 out.go:177] * Verifying Kubernetes components...
	I0510 17:40:20.279531 1172998 addons.go:69] Setting gcp-auth=true in profile "addons-661496"
	I0510 17:40:20.297550 1172998 mustload.go:65] Loading cluster: addons-661496
	I0510 17:40:20.297853 1172998 config.go:182] Loaded profile config "addons-661496": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:40:20.298334 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.298540 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.298838 1172998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:40:20.279539 1172998 addons.go:69] Setting ingress=true in profile "addons-661496"
	I0510 17:40:20.299157 1172998 addons.go:238] Setting addon ingress=true in "addons-661496"
	I0510 17:40:20.299239 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.299776 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.299915 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.307215 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0510 17:40:20.312444 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0510 17:40:20.280263 1172998 addons.go:238] Setting addon volumesnapshots=true in "addons-661496"
	I0510 17:40:20.313615 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279550 1172998 addons.go:69] Setting registry=true in profile "addons-661496"
	I0510 17:40:20.313823 1172998 addons.go:238] Setting addon registry=true in "addons-661496"
	I0510 17:40:20.313871 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.314084 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.314305 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.314391 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.314508 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.317106 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0510 17:40:20.317427 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0510 17:40:20.317694 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.318125 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.318226 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.318311 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.321417 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.321444 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.321927 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.322699 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.323609 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.323634 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.323991 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.324064 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.324091 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.324189 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.324415 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40705
	I0510 17:40:20.325051 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.325082 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.325052 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.325096 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.325215 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.325811 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.325884 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.326583 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.328654 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.328698 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.328940 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.328958 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.329085 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0510 17:40:20.329266 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.334031 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.334145 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.334725 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0510 17:40:20.335070 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.335083 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.335149 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
	I0510 17:40:20.335244 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.335293 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.335511 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.336624 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.336667 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.337419 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.337978 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.337996 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.338437 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.338622 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.340760 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.344334 1172998 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-661496"
	I0510 17:40:20.344399 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.344896 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.344945 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.345875 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.345902 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.346545 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.347379 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.347480 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.358036 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0510 17:40:20.358677 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.359323 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.359359 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.359897 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.360799 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.360831 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.366725 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33397
	I0510 17:40:20.375728 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.376394 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.376432 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.376886 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.377163 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.377736 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0510 17:40:20.377926 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I0510 17:40:20.378444 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.379388 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.379928 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.379953 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.380277 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.380299 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.380722 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.380809 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0510 17:40:20.381357 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.381403 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.381987 1172998 addons.go:238] Setting addon default-storageclass=true in "addons-661496"
	I0510 17:40:20.382039 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.382084 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0510 17:40:20.382405 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.382447 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.382524 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0510 17:40:20.382859 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.382968 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.383427 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.383453 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.383846 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.384012 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.384076 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.384873 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45873
	I0510 17:40:20.385047 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.385090 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I0510 17:40:20.385558 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.385659 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.386158 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.386186 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.386466 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0510 17:40:20.386599 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.386623 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36749
	I0510 17:40:20.386815 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.387036 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.387118 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.387126 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.387282 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.387499 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38363
	I0510 17:40:20.387802 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.387836 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.387976 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.387987 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.388138 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.388181 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.388246 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.388310 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.388358 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.388481 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.388531 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.388767 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.388901 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.388914 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.389161 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.389282 1172998 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.1
	I0510 17:40:20.389612 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.389674 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.390550 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.390570 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.391364 1172998 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.33
	I0510 17:40:20.391604 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0510 17:40:20.391758 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.392187 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.392390 1172998 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.11.1
	I0510 17:40:20.393188 1172998 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 17:40:20.393315 1172998 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0510 17:40:20.393968 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0510 17:40:20.394003 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.393346 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.394442 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.394482 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.395436 1172998 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.11.1
	I0510 17:40:20.395527 1172998 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:40:20.395543 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 17:40:20.395562 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.393365 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.397059 1172998 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0510 17:40:20.398000 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I0510 17:40:20.398892 1172998 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 17:40:20.398910 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0510 17:40:20.398933 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.399313 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.399392 1172998 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0510 17:40:20.399407 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480231 bytes)
	I0510 17:40:20.399435 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.401697 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.401721 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.402992 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.403009 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.403112 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.403334 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.403406 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.403428 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.403537 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.403835 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.403855 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.403878 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.403889 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.403917 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.404011 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.404353 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.404402 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.404529 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45509
	I0510 17:40:20.404570 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.404640 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.404684 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.404697 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.404823 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.404920 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.405006 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
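Each addon installer opens its own SSH session to copy manifests into the VM, which is why these "new ssh client" lines repeat with identical endpoints. For manual debugging, the same access is available either through minikube or directly with the key and endpoint shown in the log:

    minikube ssh -p addons-661496
    ssh -p 22 -i /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa docker@192.168.39.168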
	I0510 17:40:20.405894 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0510 17:40:20.406011 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0510 17:40:20.408472 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.408479 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0510 17:40:20.408961 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.408990 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409003 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.409064 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409093 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409142 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409240 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.409551 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.409584 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.409616 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.409642 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.409657 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.409656 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.409737 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409868 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.409871 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.409923 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.410161 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.410229 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.410278 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.410429 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.410449 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.410553 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.410558 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.410671 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.410678 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.410915 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.410955 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.411008 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.411013 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.411207 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.411392 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.412038 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.412047 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.412057 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.412058 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.412850 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.412857 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.412909 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.413297 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.413369 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.413410 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.413462 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.413492 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.414732 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.414744 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.416214 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.416435 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.416532 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0510 17:40:20.416660 1172998 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.1
	I0510 17:40:20.416863 1172998 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0510 17:40:20.418146 1172998 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 17:40:20.418168 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0510 17:40:20.418187 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.418744 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0510 17:40:20.418822 1172998 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.39.0
	I0510 17:40:20.418942 1172998 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 17:40:20.418956 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0510 17:40:20.418974 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.420226 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0510 17:40:20.420244 1172998 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0510 17:40:20.420266 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.420588 1172998 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0510 17:40:20.420615 1172998 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0510 17:40:20.420633 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.422504 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.423079 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.423109 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.423143 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0510 17:40:20.423447 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.423682 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.423866 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.424033 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.425508 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0510 17:40:20.425731 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.426377 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.426405 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.426756 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.426793 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.427321 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.427342 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.427724 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.427751 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.427867 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0510 17:40:20.427980 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.428168 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.428284 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.428344 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.428355 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.428486 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.428501 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.428535 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.428634 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.428633 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.428633 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.429197 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.430428 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0510 17:40:20.431635 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0510 17:40:20.432870 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0510 17:40:20.432912 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0510 17:40:20.433535 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.433692 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0510 17:40:20.434514 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.434574 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.434677 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.435237 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.435256 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.435240 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0510 17:40:20.435683 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.435755 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.435877 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.436387 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.436412 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.436604 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0510 17:40:20.436622 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0510 17:40:20.436643 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.439003 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.439081 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42383
	I0510 17:40:20.439644 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.439756 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0510 17:40:20.440099 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.440126 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.440561 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.440612 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.440825 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.441373 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.441396 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.441510 1172998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0510 17:40:20.441632 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.441860 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.442008 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.442035 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.442067 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.442250 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.442634 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.442876 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.443039 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.443661 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.443962 1172998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 17:40:20.445249 1172998 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0510 17:40:20.446342 1172998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 17:40:20.447435 1172998 out.go:177]   - Using image docker.io/busybox:stable
	I0510 17:40:20.447660 1172998 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 17:40:20.447677 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0510 17:40:20.447698 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.447805 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39069
	I0510 17:40:20.448374 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.449196 1172998 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 17:40:20.449216 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0510 17:40:20.449877 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.450851 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.450871 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.451658 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.451749 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.451796 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.452349 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.452383 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.452403 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.452621 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.452961 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.453177 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.454153 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.454425 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.454773 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.454799 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.455035 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.455208 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.455407 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.455553 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.456199 1172998 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0510 17:40:20.457564 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0510 17:40:20.457586 1172998 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0510 17:40:20.457607 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.460253 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38505
	I0510 17:40:20.461030 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.461140 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.461516 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0510 17:40:20.461657 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.461679 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.461721 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.461753 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.461948 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.462131 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.462139 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.462138 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.462343 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.462393 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.462499 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.462655 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.462685 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.463058 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.463246 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.464706 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.464956 1172998 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 17:40:20.464973 1172998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 17:40:20.464990 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.465184 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.465981 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0510 17:40:20.466487 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.467031 1172998 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0510 17:40:20.467202 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.467219 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.467599 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.467797 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.468246 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.468465 1172998 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 17:40:20.468481 1172998 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 17:40:20.468498 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.469212 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.469239 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.469390 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.469761 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.470102 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.470282 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.470439 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.471024 1172998 out.go:177]   - Using image docker.io/registry:3.0.0
	I0510 17:40:20.471896 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.472367 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.472400 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.472689 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.472859 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.473043 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.473207 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.473896 1172998 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0510 17:40:20.475173 1172998 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0510 17:40:20.475184 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0510 17:40:20.475198 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.478203 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.478661 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.478692 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.478809 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.478937 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.479002 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.479063 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.633896 1172998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W0510 17:40:20.657259 1172998 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46478->192.168.39.168:22: read: connection reset by peer
	I0510 17:40:20.657301 1172998 retry.go:31] will retry after 243.584195ms: ssh: handshake failed: read tcp 192.168.39.1:46478->192.168.39.168:22: read: connection reset by peer
	W0510 17:40:20.657387 1172998 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46492->192.168.39.168:22: read: connection reset by peer
	I0510 17:40:20.657397 1172998 retry.go:31] will retry after 192.996834ms: ssh: handshake failed: read tcp 192.168.39.1:46492->192.168.39.168:22: read: connection reset by peer
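
Both handshake failures above are transient dial errors (likely the guest's sshd resetting new connections while it is still settling), so sshutil retries after a short randomized delay instead of aborting the addon installs. A rough shell equivalent of that retry loop, for illustration only — minikube implements this in Go, and the key path is the one shown in the sshutil.go:53 lines:

    for delay in 0.2 0.4 0.8; do
        ssh -i /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa docker@192.168.39.168 true && break
        sleep "$delay"
    done
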
	I0510 17:40:20.662103 1172998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:40:20.983386 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0510 17:40:20.985179 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:40:21.064550 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 17:40:21.123191 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0510 17:40:21.144014 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 17:40:21.169058 1172998 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0510 17:40:21.169094 1172998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0510 17:40:21.255187 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 17:40:21.263131 1172998 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 17:40:21.263156 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0510 17:40:21.267100 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0510 17:40:21.267125 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0510 17:40:21.287558 1172998 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0510 17:40:21.287582 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0510 17:40:21.417189 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 17:40:21.486484 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 17:40:21.500434 1172998 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0510 17:40:21.500465 1172998 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0510 17:40:21.604959 1172998 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0510 17:40:21.604995 1172998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0510 17:40:21.725738 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0510 17:40:21.726475 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:40:21.796808 1172998 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0510 17:40:21.796839 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0510 17:40:21.871014 1172998 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 17:40:21.871043 1172998 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 17:40:21.963683 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0510 17:40:21.963713 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0510 17:40:21.996922 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0510 17:40:21.996950 1172998 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0510 17:40:22.342513 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0510 17:40:22.342542 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0510 17:40:22.356535 1172998 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:40:22.356560 1172998 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0510 17:40:22.362837 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0510 17:40:22.366737 1172998 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0510 17:40:22.366772 1172998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0510 17:40:22.417129 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0510 17:40:22.417169 1172998 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0510 17:40:22.585777 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0510 17:40:22.585813 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0510 17:40:22.679690 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:40:22.682372 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0510 17:40:22.682392 1172998 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0510 17:40:22.731527 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0510 17:40:22.731571 1172998 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0510 17:40:22.751903 1172998 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.117957188s)
	I0510 17:40:22.751947 1172998 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
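
The two-second sed pipeline above injects a hosts block into the coredns Corefile so pods can resolve host.minikube.internal to the host-side gateway (192.168.39.1). A minimal sketch of the patched Corefile fragment, assuming the stock kubeadm layout with unrelated plugins trimmed:

    .:53 {
        log
        errors
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
    }
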
	I0510 17:40:22.751969 1172998 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.089837396s)
	I0510 17:40:22.752889 1172998 node_ready.go:35] waiting up to 6m0s for node "addons-661496" to be "Ready" ...
	I0510 17:40:22.761770 1172998 node_ready.go:49] node "addons-661496" is "Ready"
	I0510 17:40:22.761802 1172998 node_ready.go:38] duration metric: took 8.883307ms for node "addons-661496" to be "Ready" ...
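
The wait resolves in under 9ms here because the node was already Ready when it began. An equivalent manual check against this cluster would be:

    kubectl --context addons-661496 wait --for=condition=Ready node/addons-661496 --timeout=6m0s
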
	I0510 17:40:22.761819 1172998 api_server.go:52] waiting for apiserver process to appear ...
	I0510 17:40:22.761884 1172998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:40:23.014160 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0510 17:40:23.014191 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0510 17:40:23.257564 1172998 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-661496" context rescaled to 1 replicas
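
The rescale above trims the default two coredns replicas down to one, which is sufficient on a single-node cluster. Done by hand it would be:

    kubectl --context addons-661496 -n kube-system scale deployment coredns --replicas=1
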
	I0510 17:40:23.318996 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0510 17:40:23.319027 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0510 17:40:23.382368 1172998 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 17:40:23.382397 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0510 17:40:23.529409 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0510 17:40:23.529438 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0510 17:40:23.720142 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0510 17:40:23.732745 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 17:40:23.786281 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0510 17:40:23.786321 1172998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0510 17:40:24.129437 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0510 17:40:24.129471 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0510 17:40:24.426077 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.442645042s)
	I0510 17:40:24.426137 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:24.426151 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:24.426622 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:24.426670 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:24.426691 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:24.426694 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:24.426705 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:24.427055 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:24.427072 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:24.524010 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0510 17:40:24.524037 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0510 17:40:25.051509 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 17:40:25.051541 1172998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0510 17:40:25.179711 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 17:40:25.348066 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.362838877s)
	I0510 17:40:25.348138 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:25.348173 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:25.348149 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.283566236s)
	I0510 17:40:25.348221 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:25.348238 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:25.348546 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:25.348557 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:25.348564 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:25.348574 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:25.348582 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:25.348582 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:25.348608 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:25.348623 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:25.348638 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:25.348646 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:25.349047 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:25.349050 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:25.349058 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:25.349057 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:25.349047 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:25.349070 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:27.475958 1172998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0510 17:40:27.476001 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:27.480092 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:27.480622 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:27.480645 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:27.480872 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:27.481117 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:27.481308 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:27.481495 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:28.079226 1172998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0510 17:40:28.290769 1172998 addons.go:238] Setting addon gcp-auth=true in "addons-661496"
	I0510 17:40:28.290863 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:28.291335 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:28.291385 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:28.309401 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I0510 17:40:28.309915 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:28.310464 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:28.310495 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:28.310895 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:28.311526 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:28.311565 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:28.327688 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I0510 17:40:28.328219 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:28.328749 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:28.328781 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:28.329175 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:28.329396 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:28.331278 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:28.331545 1172998 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0510 17:40:28.331578 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:28.334625 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:28.335054 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:28.335087 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:28.335372 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:28.335576 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:28.335781 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:28.335938 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:32.670911 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.547675412s)
	I0510 17:40:32.670959 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.526904485s)
	I0510 17:40:32.670989 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671004 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671008 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (11.415793955s)
	I0510 17:40:32.671035 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671043 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671014 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671006 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671125 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.253908737s)
	I0510 17:40:32.671231 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671245 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671257 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.184732834s)
	I0510 17:40:32.671292 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671305 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671425 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (10.945652272s)
	I0510 17:40:32.671441 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671449 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671517 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.94502032s)
	I0510 17:40:32.671533 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671541 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671581 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.30871244s)
	I0510 17:40:32.671672 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.671674 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.671685 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.671693 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671700 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671715 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.671723 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.671732 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671738 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671797 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.671803 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.671812 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671821 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671971 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672000 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672014 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.992287254s)
	I0510 17:40:32.672047 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672058 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672127 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.672168 1172998 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.910244887s)
	I0510 17:40:32.672177 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672184 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672191 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672191 1172998 api_server.go:72] duration metric: took 12.393014419s to wait for apiserver process to appear ...
	I0510 17:40:32.672197 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672199 1172998 api_server.go:88] waiting for apiserver healthz status ...
	I0510 17:40:32.672248 1172998 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0510 17:40:32.672302 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672313 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672321 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672328 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672547 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.672575 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672582 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672590 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672592 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.952397085s)
	I0510 17:40:32.672613 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.672616 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672627 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672637 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672644 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672759 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.939982644s)
	I0510 17:40:32.672806 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	W0510 17:40:32.672807 1172998 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0510 17:40:32.672837 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672842 1172998 retry.go:31] will retry after 270.785919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
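
The failure above is an ordering problem rather than a broken manifest: all six files go through a single kubectl apply, so the VolumeSnapshotClass object reaches the API server before the volumesnapshotclasses CRD it depends on has been established, and the apply is simply retried ~270ms later. The standard way to make such installs deterministic, sketched here with the CRD named in the error, is to apply the CRDs first and wait for their Established condition before applying resources that use them:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
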
	I0510 17:40:32.672844 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672597 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672856 1172998 addons.go:479] Verifying addon metrics-server=true in "addons-661496"
	I0510 17:40:32.672926 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672934 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672944 1172998 addons.go:479] Verifying addon ingress=true in "addons-661496"
	I0510 17:40:32.675102 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.675132 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.675138 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.675415 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.675451 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.675459 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.675468 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.675474 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.676284 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.676318 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.676324 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.676563 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.676587 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.676592 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.676601 1172998 addons.go:479] Verifying addon registry=true in "addons-661496"
	I0510 17:40:32.676668 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.677495 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.677499 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.677525 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.677535 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.677574 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.676687 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.677716 1172998 out.go:177] * Verifying ingress addon...
	I0510 17:40:32.676712 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.677794 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.677818 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.677852 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.677943 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.677979 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.677987 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.676722 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.678013 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.678023 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.678030 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.678071 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.676727 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.676743 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.678176 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.678189 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.678215 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.678239 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.678529 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.677615 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.679629 1172998 out.go:177] * Verifying registry addon...
	I0510 17:40:32.679674 1172998 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0510 17:40:32.679975 1172998 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-661496 service yakd-dashboard -n yakd-dashboard
	
	I0510 17:40:32.681581 1172998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0510 17:40:32.710585 1172998 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
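
A 200 from /healthz means the apiserver is serving; the endpoint is readable without credentials (the default system:public-info-viewer binding exposes it), so the same probe can be reproduced by hand, with -k skipping verification of the cluster's self-signed serving certificate:

    curl -k https://192.168.39.168:8443/healthz
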
	I0510 17:40:32.715250 1172998 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0510 17:40:32.715275 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:32.715409 1172998 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0510 17:40:32.715436 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:32.733284 1172998 api_server.go:141] control plane version: v1.33.0
	I0510 17:40:32.733338 1172998 api_server.go:131] duration metric: took 61.110993ms to wait for apiserver health ...
	I0510 17:40:32.733353 1172998 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 17:40:32.769273 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.769301 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.769628 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.769646 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.769652 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	W0510 17:40:32.769760 1172998 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
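
The warning above is an ordinary optimistic-concurrency conflict: the storageclass object changed between read and update, so the write carried a stale resourceVersion and the apiserver rejected it. The field in contention is the standard default-class annotation, which can also be toggled manually once things settle, e.g.:

    kubectl --context addons-661496 patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
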
	I0510 17:40:32.782708 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.782729 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.783069 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.783155 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.783171 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.824981 1172998 system_pods.go:59] 17 kube-system pods found
	I0510 17:40:32.825040 1172998 system_pods.go:61] "amd-gpu-device-plugin-v4gbz" [f294f291-744b-4850-90b4-50d91dab8406] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 17:40:32.825051 1172998 system_pods.go:61] "coredns-674b8bbfcf-6m8wh" [de3b4b5b-9d45-48fa-bc02-7e68d0f4a719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:40:32.825064 1172998 system_pods.go:61] "coredns-674b8bbfcf-tdjvp" [b934ce97-eb9a-44e0-8dce-b5f8bb54f550] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:40:32.825071 1172998 system_pods.go:61] "csi-hostpath-attacher-0" [85d52be6-3924-4ae3-bad8-06764ecf38a6] Pending
	I0510 17:40:32.825077 1172998 system_pods.go:61] "etcd-addons-661496" [631566ce-1617-43c0-aae6-20963bfed3d4] Running
	I0510 17:40:32.825083 1172998 system_pods.go:61] "kube-apiserver-addons-661496" [8de722c2-b091-489b-9e78-d16d797f7fe7] Running
	I0510 17:40:32.825088 1172998 system_pods.go:61] "kube-controller-manager-addons-661496" [3538f9c0-52da-492c-8efd-07edc4fb3790] Running
	I0510 17:40:32.825098 1172998 system_pods.go:61] "kube-ingress-dns-minikube" [51ebd7c0-1306-4a22-bd30-9f9e94fca514] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 17:40:32.825104 1172998 system_pods.go:61] "kube-proxy-prpxb" [385933ac-4f81-4f5f-a113-9b4ee3a18d3b] Running
	I0510 17:40:32.825110 1172998 system_pods.go:61] "kube-scheduler-addons-661496" [d3a212e1-5d32-4025-bdd7-3dfbe0fb0246] Running
	I0510 17:40:32.825117 1172998 system_pods.go:61] "metrics-server-7fbb699795-5w57m" [2e1beec0-5626-4c7b-88bc-8260d997758b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 17:40:32.825138 1172998 system_pods.go:61] "nvidia-device-plugin-daemonset-j9pr5" [14fb66ef-5095-4274-8657-2c667308fa0d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 17:40:32.825151 1172998 system_pods.go:61] "registry-694bd45846-zdzh4" [4ba351e4-9daa-43da-8b99-54cf78e8b8d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 17:40:32.825166 1172998 system_pods.go:61] "registry-proxy-8pcc7" [b49e8001-c050-47a2-8471-50c2355d968d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 17:40:32.825176 1172998 system_pods.go:61] "snapshot-controller-68b874b76f-88ddv" [21c08b41-9091-4a26-a852-2590e7c0ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 17:40:32.825187 1172998 system_pods.go:61] "snapshot-controller-68b874b76f-wz768" [ee20255a-a346-401c-aa60-c4d336342082] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 17:40:32.825201 1172998 system_pods.go:61] "storage-provisioner" [4b684c4b-a952-48da-bc38-3f6663c462e7] Running
	I0510 17:40:32.825213 1172998 system_pods.go:74] duration metric: took 91.852459ms to wait for pod list to return data ...
	I0510 17:40:32.825224 1172998 default_sa.go:34] waiting for default service account to be created ...
	I0510 17:40:32.916771 1172998 default_sa.go:45] found service account: "default"
	I0510 17:40:32.916807 1172998 default_sa.go:55] duration metric: took 91.573454ms for default service account to be created ...
	I0510 17:40:32.916818 1172998 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 17:40:32.944142 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
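The apply above installs the external-snapshotter pieces: the VolumeSnapshotClass, VolumeSnapshotContent, and VolumeSnapshot CRDs, their RBAC, and the snapshot-controller Deployment. A CRD becomes usable once its Established condition turns True; a minimal check with the apiextensions clientset (illustrative, not how minikube itself verifies the addon):

	import (
		"context"

		apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func crdEstablished(cs apiextensionsclient.Interface, name string) (bool, error) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
			context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range crd.Status.Conditions {
			if c.Type == apiextensionsv1.Established {
				return c.Status == apiextensionsv1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	// e.g. crdEstablished(cs, "volumesnapshots.snapshot.storage.k8s.io")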
	I0510 17:40:33.008687 1172998 system_pods.go:86] 18 kube-system pods found
	I0510 17:40:33.008733 1172998 system_pods.go:89] "amd-gpu-device-plugin-v4gbz" [f294f291-744b-4850-90b4-50d91dab8406] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 17:40:33.008744 1172998 system_pods.go:89] "coredns-674b8bbfcf-6m8wh" [de3b4b5b-9d45-48fa-bc02-7e68d0f4a719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:40:33.008757 1172998 system_pods.go:89] "coredns-674b8bbfcf-tdjvp" [b934ce97-eb9a-44e0-8dce-b5f8bb54f550] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:40:33.008766 1172998 system_pods.go:89] "csi-hostpath-attacher-0" [85d52be6-3924-4ae3-bad8-06764ecf38a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 17:40:33.008771 1172998 system_pods.go:89] "csi-hostpathplugin-q57z4" [f716e96c-2b81-41bd-a505-ef8bab6002bf] Pending
	I0510 17:40:33.008777 1172998 system_pods.go:89] "etcd-addons-661496" [631566ce-1617-43c0-aae6-20963bfed3d4] Running
	I0510 17:40:33.008782 1172998 system_pods.go:89] "kube-apiserver-addons-661496" [8de722c2-b091-489b-9e78-d16d797f7fe7] Running
	I0510 17:40:33.008788 1172998 system_pods.go:89] "kube-controller-manager-addons-661496" [3538f9c0-52da-492c-8efd-07edc4fb3790] Running
	I0510 17:40:33.008798 1172998 system_pods.go:89] "kube-ingress-dns-minikube" [51ebd7c0-1306-4a22-bd30-9f9e94fca514] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 17:40:33.008805 1172998 system_pods.go:89] "kube-proxy-prpxb" [385933ac-4f81-4f5f-a113-9b4ee3a18d3b] Running
	I0510 17:40:33.008812 1172998 system_pods.go:89] "kube-scheduler-addons-661496" [d3a212e1-5d32-4025-bdd7-3dfbe0fb0246] Running
	I0510 17:40:33.008820 1172998 system_pods.go:89] "metrics-server-7fbb699795-5w57m" [2e1beec0-5626-4c7b-88bc-8260d997758b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 17:40:33.008828 1172998 system_pods.go:89] "nvidia-device-plugin-daemonset-j9pr5" [14fb66ef-5095-4274-8657-2c667308fa0d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 17:40:33.008846 1172998 system_pods.go:89] "registry-694bd45846-zdzh4" [4ba351e4-9daa-43da-8b99-54cf78e8b8d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 17:40:33.008857 1172998 system_pods.go:89] "registry-proxy-8pcc7" [b49e8001-c050-47a2-8471-50c2355d968d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 17:40:33.008866 1172998 system_pods.go:89] "snapshot-controller-68b874b76f-88ddv" [21c08b41-9091-4a26-a852-2590e7c0ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 17:40:33.008877 1172998 system_pods.go:89] "snapshot-controller-68b874b76f-wz768" [ee20255a-a346-401c-aa60-c4d336342082] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 17:40:33.008884 1172998 system_pods.go:89] "storage-provisioner" [4b684c4b-a952-48da-bc38-3f6663c462e7] Running
	I0510 17:40:33.008897 1172998 system_pods.go:126] duration metric: took 92.070192ms to wait for k8s-apps to be running ...
	I0510 17:40:33.008911 1172998 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 17:40:33.008978 1172998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
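systemctl is-active --quiet prints nothing and reports state purely through its exit code, zero only when the unit is active, which is why the runner can simply branch on command success. A local-exec sketch of the same probe (minikube actually routes it through ssh_runner over SSH):

	import "os/exec"

	func kubeletActive() bool {
		// exit status 0 => unit is active; any non-zero exit surfaces as an error
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}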
	I0510 17:40:33.299242 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:33.301255 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:33.572025 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.39224087s)
	I0510 17:40:33.572121 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:33.572167 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:33.572147 1172998 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.240568926s)
	I0510 17:40:33.572503 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:33.572519 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:33.572530 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:33.572537 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:33.572784 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:33.572811 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:33.572818 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:33.572829 1172998 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-661496"
	I0510 17:40:33.573838 1172998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 17:40:33.574625 1172998 out.go:177] * Verifying csi-hostpath-driver addon...
	I0510 17:40:33.575959 1172998 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0510 17:40:33.576945 1172998 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0510 17:40:33.576963 1172998 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0510 17:40:33.576945 1172998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0510 17:40:33.604900 1172998 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0510 17:40:33.604928 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:33.678097 1172998 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0510 17:40:33.678135 1172998 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0510 17:40:33.690417 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:33.697715 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:33.775850 1172998 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 17:40:33.775888 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0510 17:40:33.818557 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 17:40:34.080794 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:34.188754 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:34.287745 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:34.580392 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:34.648382 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.704147097s)
	I0510 17:40:34.648441 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:34.648455 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:34.648474 1172998 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.639464519s)
	I0510 17:40:34.648516 1172998 system_svc.go:56] duration metric: took 1.63960172s WaitForService to wait for kubelet
	I0510 17:40:34.648534 1172998 kubeadm.go:578] duration metric: took 14.369355476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 17:40:34.648566 1172998 node_conditions.go:102] verifying NodePressure condition ...
	I0510 17:40:34.648728 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:34.648787 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:34.648809 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:34.648820 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:34.648792 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:34.649036 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:34.649050 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:34.651756 1172998 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 17:40:34.651778 1172998 node_conditions.go:123] node cpu capacity is 2
	I0510 17:40:34.651792 1172998 node_conditions.go:105] duration metric: took 3.219506ms to run NodePressure ...
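The NodePressure verification reads capacity straight off the Node object, here roughly 17 GiB of ephemeral storage and 2 CPUs. A sketch of fetching the same fields with client-go (cs is an assumed clientset; not minikube's exact code):

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNodeCapacity(cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().String())
		}
		return nil
	}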
	I0510 17:40:34.651809 1172998 start.go:241] waiting for startup goroutines ...
	I0510 17:40:34.692997 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:34.693002 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:34.866950 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.048332101s)
	I0510 17:40:34.867011 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:34.867027 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:34.867364 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:34.867418 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:34.867432 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:34.867440 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:34.867753 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:34.867773 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:34.868862 1172998 addons.go:479] Verifying addon gcp-auth=true in "addons-661496"
	I0510 17:40:34.870512 1172998 out.go:177] * Verifying gcp-auth addon...
	I0510 17:40:34.872659 1172998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0510 17:40:34.878541 1172998 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0510 17:40:35.081453 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:35.183209 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:35.184783 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:35.580670 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:35.683704 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:35.684812 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:36.082824 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:36.185508 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:36.185707 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:36.581408 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:36.683243 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:36.684968 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:37.080761 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:37.457827 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:37.457902 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:37.581216 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:37.682920 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:37.684504 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:38.081174 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:38.182925 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:38.184549 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:38.581105 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:38.682786 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:38.684301 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:39.080260 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:39.183474 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:39.185318 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:39.580416 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:39.682986 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:39.685111 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:40.210513 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:40.212177 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:40.212527 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:40.580396 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:40.684105 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:40.684442 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:41.081321 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:41.182910 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:41.184590 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:41.581041 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:41.685519 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:41.685771 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:42.184402 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:42.184945 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:42.188229 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:42.581232 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:42.683324 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:42.684712 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:43.080759 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:43.183956 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:43.185168 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:43.580455 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:43.683440 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:43.685105 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:44.081736 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:44.183242 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:44.184756 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:44.581605 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:44.684445 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:44.684455 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:45.081044 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:45.183075 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:45.184941 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:45.580782 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:45.684011 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:45.685836 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:46.082003 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:46.184338 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:46.185564 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:46.580918 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:46.683848 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:46.684348 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:47.081457 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:47.183481 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:47.185220 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:47.580652 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:47.683701 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:47.685242 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:48.081318 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:48.183534 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:48.185108 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:48.580446 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:48.683352 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:48.684847 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:49.080808 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:49.185978 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:49.186063 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:49.584342 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:49.686888 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:49.688825 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:50.081535 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:50.183951 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:50.184447 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:50.581090 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:50.683127 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:50.684628 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:51.081303 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:51.184532 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:51.185150 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:51.580667 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:51.688888 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:51.688962 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:52.080920 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:52.183735 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:52.184978 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:52.580322 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:52.682937 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:52.684668 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:53.080961 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:53.186964 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:53.187190 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:53.580103 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:53.683050 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:53.684758 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:54.080877 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:54.182733 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:54.184170 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:54.579991 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:54.682922 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:54.684571 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:55.081903 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:55.182588 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:55.184081 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:55.580865 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:55.683187 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:55.684937 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:56.080990 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:56.182641 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:56.184962 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:56.727950 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:56.728357 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:56.729990 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:57.081040 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:57.183015 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:57.185176 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:57.580564 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:57.683267 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:57.684825 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:58.080673 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:58.184002 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:58.184989 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:58.580631 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:58.683345 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:58.684928 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:59.081682 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:59.184431 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:59.186216 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:59.581493 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:59.684058 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:59.685510 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:00.081006 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:00.183105 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:00.185777 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:00.581308 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:00.683205 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:00.684806 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:01.080662 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:01.183645 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:01.184448 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:01.581311 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:01.683246 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:01.685051 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:02.080897 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:02.186491 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:02.186517 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:02.581157 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:02.683671 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:02.686197 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:03.081250 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:03.183794 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:03.185739 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:03.581228 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:03.683265 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:03.684915 1172998 kapi.go:107] duration metric: took 31.003329796s to wait for kubernetes.io/minikube-addons=registry ...
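Here, 31 seconds after the wait began, every registry pod finally reports Running and that poll exits with a duration metric, while the csi-hostpath-driver and ingress-nginx polls continue below. The "duration metric" lines amount to time.Since on a timestamp captured when the wait started, along these lines (a sketch, not minikube's exact code):

	start := time.Now()
	// ... poll until every pod matching the selector reports Running ...
	log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)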
	I0510 17:41:04.080836 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:04.183653 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:04.580453 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:04.683528 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:05.081126 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:05.183202 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:05.581145 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:05.682816 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:06.081140 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:06.183002 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:06.581229 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:06.683152 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:07.080923 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:07.183889 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:07.581046 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:07.684440 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:08.080794 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:08.183839 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:08.580945 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:08.684653 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:09.081789 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:09.183084 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:09.580851 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:09.683870 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:10.080313 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:10.183486 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:10.581525 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:10.683044 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:11.154261 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:11.182912 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:11.581152 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:11.683379 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:12.080719 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:12.183846 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:12.581434 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:12.683161 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:13.081956 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:13.184229 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:13.581549 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:13.683601 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:14.081093 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:14.183600 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:14.580730 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:14.683228 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:15.080339 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:15.183325 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:15.584013 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:15.683555 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:16.080422 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:16.183637 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:16.584050 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:16.683076 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:17.083032 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:17.189566 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:17.585161 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:17.684511 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:18.082658 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:18.183560 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:18.583266 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:18.683327 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:19.080691 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:19.184486 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:19.581623 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:19.683300 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:20.163713 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:20.184042 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:20.586731 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:20.683415 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:21.081576 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:21.183344 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:21.581174 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:21.683565 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:22.080962 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:22.184114 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:22.581341 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:22.683445 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:23.081200 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:23.183571 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:23.581424 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:23.818334 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:24.089098 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:24.184335 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:24.586605 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:24.684842 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:25.085150 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:25.184312 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:25.580768 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:25.682722 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:26.080228 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:26.183157 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:26.580683 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:26.684126 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:27.081132 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:27.182745 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:27.582158 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:27.683913 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:28.081058 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:28.183598 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:28.661956 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:28.682626 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:29.080693 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:29.183676 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:29.580348 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:29.683177 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:30.080532 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:30.183630 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:30.581087 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:30.685322 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:31.081044 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:31.183336 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:31.597518 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:31.683461 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:32.081638 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:32.449842 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:32.580686 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:32.683557 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:33.080613 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:33.183265 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:33.583045 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:33.684425 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:34.080547 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:34.183557 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:34.580536 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:34.684086 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:35.081658 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:35.184914 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:35.588387 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:35.683361 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:36.081303 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:36.183657 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:36.580759 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:36.683721 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:37.080861 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:37.185270 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:37.581853 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:37.682528 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:38.082598 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:38.183867 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:38.580878 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:38.683321 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:39.081294 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:39.183278 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:39.580938 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:39.682981 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:40.081143 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:40.182943 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:40.581046 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:40.683825 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:41.081383 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:41.183183 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:41.580674 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:41.683691 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:42.081064 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:42.184649 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:42.583847 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:42.685032 1172998 kapi.go:107] duration metric: took 1m10.00535342s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0510 17:41:43.087022 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:43.588107 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:44.080701 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:44.582672 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:45.084079 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:45.582703 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:46.080935 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:46.581871 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:47.080356 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:47.581014 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:48.081520 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:48.581818 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:49.081584 1172998 kapi.go:107] duration metric: took 1m15.504636912s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0510 17:41:58.376389 1172998 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0510 17:41:58.376415 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:41:58.876728 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:41:59.376724 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:41:59.876718 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:00.376972 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:00.876779 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:01.376633 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:01.876727 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:02.376605 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:02.876631 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:03.376928 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:03.877261 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:04.375645 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:04.876435 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:05.376016 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:05.876977 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:06.376309 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:06.877422 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:07.375557 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:07.876372 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:08.375429 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:08.875870 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:09.376421 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:09.875946 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:10.376536 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:10.875949 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:11.377634 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:11.876404 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:12.376066 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:12.876469 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:13.376814 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:13.876777 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:14.376536 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:14.876479 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:15.375681 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:15.876228 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:16.376491 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:16.876004 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:17.376729 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:17.876387 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:18.380816 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:18.877810 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:19.376712 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:19.876505 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:20.376270 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:20.877011 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:21.375577 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:21.876815 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:22.376529 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:22.876262 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:23.375848 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:23.876780 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:24.376569 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:24.876787 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:25.376349 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:25.876871 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:26.376573 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:26.876262 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:27.375454 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:27.879734 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:28.376371 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:28.876095 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:29.375776 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:29.876633 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:30.375897 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:30.875942 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:31.376292 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:31.876594 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:32.376110 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:32.876506 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:33.376034 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:33.877017 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:34.376779 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:34.877546 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:35.376400 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:35.876671 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:36.377042 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:36.876370 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:37.375892 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:37.876924 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:38.376710 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:38.876280 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:39.376029 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:39.891401 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:40.376347 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:40.876571 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:41.376070 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:41.882187 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:42.376472 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:42.876607 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:43.375706 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:43.877249 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:44.375979 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:44.877290 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:45.376116 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:45.876724 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:46.376690 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:46.876675 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:47.376285 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:47.876219 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:48.375671 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:48.876330 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:49.376111 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:49.876391 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:50.375642 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:50.875909 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:51.376647 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:51.877246 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:52.375751 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:52.876489 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:53.375914 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:53.877644 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:54.376365 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:54.876457 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:55.376133 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:55.876574 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:56.376953 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:56.877893 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:57.376643 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:57.881047 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:58.375550 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:58.876668 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:59.377069 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:59.876390 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:00.375937 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:00.876746 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:01.376792 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:01.878732 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:02.376717 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:02.877021 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:03.376817 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:03.881208 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:04.376639 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:04.876422 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:05.376519 1172998 kapi.go:107] duration metric: took 2m30.503855969s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0510 17:43:05.378320 1172998 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-661496 cluster.
	I0510 17:43:05.379801 1172998 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0510 17:43:05.381023 1172998 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0510 17:43:05.382555 1172998 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, volcano, metrics-server, nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0510 17:43:05.383825 1172998 addons.go:514] duration metric: took 2m45.104557537s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns volcano metrics-server nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0510 17:43:05.383885 1172998 start.go:246] waiting for cluster config update ...
	I0510 17:43:05.383912 1172998 start.go:255] writing updated cluster config ...
	I0510 17:43:05.384286 1172998 ssh_runner.go:195] Run: rm -f paused
	I0510 17:43:05.391228 1172998 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:43:05.395222 1172998 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-6m8wh" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.399776 1172998 pod_ready.go:94] pod "coredns-674b8bbfcf-6m8wh" is "Ready"
	I0510 17:43:05.399799 1172998 pod_ready.go:86] duration metric: took 4.552136ms for pod "coredns-674b8bbfcf-6m8wh" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.401658 1172998 pod_ready.go:83] waiting for pod "etcd-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.406056 1172998 pod_ready.go:94] pod "etcd-addons-661496" is "Ready"
	I0510 17:43:05.406148 1172998 pod_ready.go:86] duration metric: took 4.470056ms for pod "etcd-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.409345 1172998 pod_ready.go:83] waiting for pod "kube-apiserver-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.414276 1172998 pod_ready.go:94] pod "kube-apiserver-addons-661496" is "Ready"
	I0510 17:43:05.414298 1172998 pod_ready.go:86] duration metric: took 4.930227ms for pod "kube-apiserver-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.416686 1172998 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.796365 1172998 pod_ready.go:94] pod "kube-controller-manager-addons-661496" is "Ready"
	I0510 17:43:05.796398 1172998 pod_ready.go:86] duration metric: took 379.688776ms for pod "kube-controller-manager-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.996431 1172998 pod_ready.go:83] waiting for pod "kube-proxy-prpxb" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:06.395671 1172998 pod_ready.go:94] pod "kube-proxy-prpxb" is "Ready"
	I0510 17:43:06.395705 1172998 pod_ready.go:86] duration metric: took 399.242909ms for pod "kube-proxy-prpxb" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:06.596013 1172998 pod_ready.go:83] waiting for pod "kube-scheduler-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:06.995243 1172998 pod_ready.go:94] pod "kube-scheduler-addons-661496" is "Ready"
	I0510 17:43:06.995276 1172998 pod_ready.go:86] duration metric: took 399.231107ms for pod "kube-scheduler-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:06.995286 1172998 pod_ready.go:40] duration metric: took 1.604022064s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:43:07.042926 1172998 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 17:43:07.044837 1172998 out.go:177] * Done! kubectl is now configured to use "addons-661496" cluster and "default" namespace by default
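	The out.go messages above describe how the gcp-auth addon behaves once enabled: every new pod in the cluster gets the GCP credentials mounted unless it opts out via the `gcp-auth-skip-secret` label. A minimal sketch of the opt-out (the pod name `no-creds` and the pause image are illustrative, not taken from this run; the `gcp-creds` volume name is an assumption about what the webhook injects):
	
	  kubectl --context addons-661496 run no-creds \
	    --image=registry.k8s.io/pause:3.9 \
	    --labels=gcp-auth-skip-secret=true
	  # If the label took effect, no gcp-creds volume should be listed:
	  kubectl --context addons-661496 get pod no-creds \
	    -o jsonpath='{.spec.volumes[*].name}'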
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e46853278a58       56cc512116c8f       8 minutes ago       Running             busybox                   0                   17761bf6d8782       busybox
	2e55dbfa3ad40       ee44bc2368033       10 minutes ago      Running             controller                0                   43f8b636ba100       ingress-nginx-controller-7c9f76cd49-w87h8
	ea099c471e05f       a62eeff05ba51       11 minutes ago      Exited              patch                     1                   aeb99ebb9ef90       ingress-nginx-admission-patch-9d5dm
	69c61255f8eb6       a62eeff05ba51       11 minutes ago      Exited              create                    0                   dd018e9b62112       ingress-nginx-admission-create-fhvxz
	62defc806e3d4       30dd67412fdea       11 minutes ago      Running             minikube-ingress-dns      0                   60e7f8a3fe996       kube-ingress-dns-minikube
	4061d8ab8a59a       d5e667c0f2bb6       11 minutes ago      Running             amd-gpu-device-plugin     0                   36c61f3680774       amd-gpu-device-plugin-v4gbz
	5aa32f181c6fb       6e38f40d628db       12 minutes ago      Running             storage-provisioner       0                   1a635936bed99       storage-provisioner
	96f113e1188db       1cf5f116067c6       12 minutes ago      Running             coredns                   0                   d92079ffb222b       coredns-674b8bbfcf-6m8wh
	0f62645c8df43       f1184a0bd7fe5       12 minutes ago      Running             kube-proxy                0                   6725536a3ce63       kube-proxy-prpxb
	5d41575d5f369       8d72586a76469       12 minutes ago      Running             kube-scheduler            0                   caf411c817260       kube-scheduler-addons-661496
	ffb6f242cbf4c       1d579cb6d6967       12 minutes ago      Running             kube-controller-manager   0                   d1975cce9669e       kube-controller-manager-addons-661496
	b0e9d7bab929d       499038711c081       12 minutes ago      Running             etcd                      0                   77fc9fe62c7a0       etcd-addons-661496
	f0e94709db491       6ba9545b2183e       12 minutes ago      Running             kube-apiserver            0                   36baa3f15bdb7       kube-apiserver-addons-661496
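	Note that the two Exited entries above (create and patch) are the ingress-nginx admission webhook setup jobs, which run once and terminate; their Exited state indicates completion, not failure. A sketch for cross-checking (command illustrative, not part of this run):
	
	  kubectl --context addons-661496 -n ingress-nginx get jobs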
	
	
	==> containerd <==
	May 10 17:47:47 addons-661496 containerd[847]: time="2025-05-10T17:47:47.038284297Z" level=info msg="shim disconnected" id=da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96 namespace=k8s.io
	May 10 17:47:47 addons-661496 containerd[847]: time="2025-05-10T17:47:47.038377964Z" level=warning msg="cleaning up after shim disconnected" id=da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96 namespace=k8s.io
	May 10 17:47:47 addons-661496 containerd[847]: time="2025-05-10T17:47:47.038389592Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 10 17:47:47 addons-661496 containerd[847]: time="2025-05-10T17:47:47.112413815Z" level=info msg="TearDown network for sandbox \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\" successfully"
	May 10 17:47:47 addons-661496 containerd[847]: time="2025-05-10T17:47:47.112513383Z" level=info msg="StopPodSandbox for \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\" returns successfully"
	May 10 17:47:48 addons-661496 containerd[847]: time="2025-05-10T17:47:48.045226289Z" level=info msg="RemoveContainer for \"37626d630e4f1213f46fda0a2691ba107a1c508b9087b39351a5ac0597888ccd\""
	May 10 17:47:48 addons-661496 containerd[847]: time="2025-05-10T17:47:48.056307649Z" level=info msg="RemoveContainer for \"37626d630e4f1213f46fda0a2691ba107a1c508b9087b39351a5ac0597888ccd\" returns successfully"
	May 10 17:48:17 addons-661496 containerd[847]: time="2025-05-10T17:48:17.354764237Z" level=info msg="StopPodSandbox for \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\""
	May 10 17:48:17 addons-661496 containerd[847]: time="2025-05-10T17:48:17.381923214Z" level=info msg="TearDown network for sandbox \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\" successfully"
	May 10 17:48:17 addons-661496 containerd[847]: time="2025-05-10T17:48:17.381965381Z" level=info msg="StopPodSandbox for \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\" returns successfully"
	May 10 17:48:17 addons-661496 containerd[847]: time="2025-05-10T17:48:17.382587399Z" level=info msg="RemovePodSandbox for \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\""
	May 10 17:48:17 addons-661496 containerd[847]: time="2025-05-10T17:48:17.382815574Z" level=info msg="Forcibly stopping sandbox \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\""
	May 10 17:48:17 addons-661496 containerd[847]: time="2025-05-10T17:48:17.403754191Z" level=info msg="TearDown network for sandbox \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\" successfully"
	May 10 17:48:17 addons-661496 containerd[847]: time="2025-05-10T17:48:17.411960963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	May 10 17:48:17 addons-661496 containerd[847]: time="2025-05-10T17:48:17.412202207Z" level=info msg="RemovePodSandbox \"da00fa4d62c35e2dcf60774a608708225027fb4bb69f4c6ad0b8596c25f33d96\" returns successfully"
	May 10 17:50:07 addons-661496 containerd[847]: time="2025-05-10T17:50:07.817065663Z" level=info msg="PullImage \"busybox:stable\""
	May 10 17:50:07 addons-661496 containerd[847]: time="2025-05-10T17:50:07.821982008Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:50:08 addons-661496 containerd[847]: time="2025-05-10T17:50:08.430391647Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:50:10 addons-661496 containerd[847]: time="2025-05-10T17:50:10.490056796Z" level=error msg="PullImage \"busybox:stable\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 17:50:10 addons-661496 containerd[847]: time="2025-05-10T17:50:10.490460058Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=21178"
	May 10 17:50:23 addons-661496 containerd[847]: time="2025-05-10T17:50:23.817841059Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	May 10 17:50:23 addons-661496 containerd[847]: time="2025-05-10T17:50:23.820587040Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:50:24 addons-661496 containerd[847]: time="2025-05-10T17:50:24.450194237Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:50:27 addons-661496 containerd[847]: time="2025-05-10T17:50:27.189727840Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 17:50:27 addons-661496 containerd[847]: time="2025-05-10T17:50:27.189855669Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=21300"
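	Both pull failures above are Docker Hub's unauthenticated rate limit (HTTP 429 from registry-1.docker.io), which is why docker.io/nginx:alpine and busybox:stable never leave ImagePullBackOff. One hedged workaround sketch, assuming a host machine that still has pull quota (or is authenticated via `docker login`); the commands are illustrative, not part of this run:
	
	  docker pull docker.io/library/nginx:alpine
	  minikube -p addons-661496 image load docker.io/library/nginx:alpine
	  # Repeat for any other rate-limited image (e.g. busybox:stable)
	  # so the kubelet finds it in the node's local containerd store.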
	
	
	==> coredns [96f113e1188dbabe774b4d904716ffca7a49f5575a945e1c5a06730298098808] <==
	[INFO] 10.244.0.8:37407 - 1651 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000653913s
	[INFO] 10.244.0.8:37407 - 24733 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000110023s
	[INFO] 10.244.0.8:37407 - 35665 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000359567s
	[INFO] 10.244.0.8:37407 - 9024 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000213984s
	[INFO] 10.244.0.8:37407 - 6547 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000185165s
	[INFO] 10.244.0.8:37407 - 55798 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000127176s
	[INFO] 10.244.0.8:37407 - 54927 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000200961s
	[INFO] 10.244.0.8:51946 - 35519 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149475s
	[INFO] 10.244.0.8:51946 - 35794 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00037073s
	[INFO] 10.244.0.8:58194 - 53577 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000230425s
	[INFO] 10.244.0.8:58194 - 53359 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00033287s
	[INFO] 10.244.0.8:53557 - 45474 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102418s
	[INFO] 10.244.0.8:53557 - 45715 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000186201s
	[INFO] 10.244.0.8:39461 - 56505 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000121214s
	[INFO] 10.244.0.8:39461 - 56287 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000169677s
	[INFO] 10.244.0.27:54699 - 52656 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000763266s
	[INFO] 10.244.0.27:52320 - 24998 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000586839s
	[INFO] 10.244.0.27:44324 - 8059 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000183014s
	[INFO] 10.244.0.27:35935 - 48888 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000217207s
	[INFO] 10.244.0.27:53338 - 29968 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155345s
	[INFO] 10.244.0.27:47993 - 59292 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000196871s
	[INFO] 10.244.0.27:54710 - 20206 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004822824s
	[INFO] 10.244.0.27:55930 - 47065 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005644589s
	[INFO] 10.244.0.32:58617 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00035506s
	[INFO] 10.244.0.32:59494 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000211617s
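	The NXDOMAIN bursts above are expected behavior, not errors: with the default ndots:5 resolver settings, each lookup is first expanded through the pod's search domains (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the fully qualified name returns NOERROR. A quick way to confirm from the running busybox pod (assuming its shell tooling is available; the trailing dot marks the name as absolute, so the search list is skipped):
	
	  kubectl --context addons-661496 exec busybox -- cat /etc/resolv.conf
	  kubectl --context addons-661496 exec busybox -- \
	    nslookup registry.kube-system.svc.cluster.local.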
	
	
	==> describe nodes <==
	Name:               addons-661496
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-661496
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=addons-661496
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_40_16_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-661496
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:40:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-661496
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 17:52:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:49:58 +0000   Sat, 10 May 2025 17:40:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:49:58 +0000   Sat, 10 May 2025 17:40:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:49:58 +0000   Sat, 10 May 2025 17:40:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:49:58 +0000   Sat, 10 May 2025 17:40:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    addons-661496
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	System Info:
	  Machine ID:                 35093bd7e517431a9628c06138768a2f
	  System UUID:                35093bd7-e517-431a-9628-c06138768a2f
	  Boot ID:                    d04c19cd-be15-4f30-98c2-b9909b79a3a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  ingress-nginx               ingress-nginx-controller-7c9f76cd49-w87h8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 amd-gpu-device-plugin-v4gbz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-674b8bbfcf-6m8wh                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-661496                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-661496                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-661496        200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-prpxb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-661496                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-661496 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-661496 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-661496 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-661496 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-661496 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-661496 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-661496 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-661496 event: Registered Node addons-661496 in Controller
	
	
	==> dmesg <==
	[  +0.000057] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.015220] kauditd_printk_skb: 109 callbacks suppressed
	[  +8.528125] kauditd_printk_skb: 129 callbacks suppressed
	[May10 17:41] kauditd_printk_skb: 2 callbacks suppressed
	[  +4.459069] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.351598] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.526521] kauditd_printk_skb: 45 callbacks suppressed
	[  +0.871595] kauditd_printk_skb: 19 callbacks suppressed
	[  +2.985223] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.010178] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.025628] kauditd_printk_skb: 21 callbacks suppressed
	[May10 17:43] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.301687] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 20 callbacks suppressed
	[May10 17:44] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000027] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.969275] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.712489] kauditd_printk_skb: 34 callbacks suppressed
	[  +1.484431] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.163032] kauditd_printk_skb: 15 callbacks suppressed
	[  +3.363842] kauditd_printk_skb: 14 callbacks suppressed
	[  +3.029581] kauditd_printk_skb: 36 callbacks suppressed
	[May10 17:45] kauditd_printk_skb: 7 callbacks suppressed
	[May10 17:47] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [b0e9d7bab929dd860fc9fe1b1ebd5f5ba31e3fd422cfd59a025ce19eace353e2] <==
	{"level":"info","ts":"2025-05-10T17:41:11.147195Z","caller":"traceutil/trace.go:171","msg":"trace[322139401] linearizableReadLoop","detail":"{readStateIndex:1169; appliedIndex:1168; }","duration":"276.724755ms","start":"2025-05-10T17:41:10.870449Z","end":"2025-05-10T17:41:11.147174Z","steps":["trace[322139401] 'read index received'  (duration: 276.550526ms)","trace[322139401] 'applied index is now lower than readState.Index'  (duration: 173.811µs)"],"step_count":2}
	{"level":"info","ts":"2025-05-10T17:41:11.147269Z","caller":"traceutil/trace.go:171","msg":"trace[1632780145] transaction","detail":"{read_only:false; response_revision:1141; number_of_response:1; }","duration":"331.849089ms","start":"2025-05-10T17:41:10.815414Z","end":"2025-05-10T17:41:11.147263Z","steps":["trace[1632780145] 'process raft request'  (duration: 331.614968ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:11.147375Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-05-10T17:41:10.815400Z","time spent":"331.886483ms","remote":"127.0.0.1:43618","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1140 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-05-10T17:41:11.147561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.102405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:41:11.147597Z","caller":"traceutil/trace.go:171","msg":"trace[111070395] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1141; }","duration":"277.160719ms","start":"2025-05-10T17:41:10.870428Z","end":"2025-05-10T17:41:11.147589Z","steps":["trace[111070395] 'agreement among raft nodes before linearized reading'  (duration: 277.104791ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:41:14.454502Z","caller":"traceutil/trace.go:171","msg":"trace[1114780893] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"178.979213ms","start":"2025-05-10T17:41:14.275506Z","end":"2025-05-10T17:41:14.454485Z","steps":["trace[1114780893] 'process raft request'  (duration: 178.719391ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:20.157316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.468883ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17287232900780463074 > lease_revoke:<id:6fe896bb487712c3>","response":"size:29"}
	{"level":"info","ts":"2025-05-10T17:41:20.157491Z","caller":"traceutil/trace.go:171","msg":"trace[312139809] linearizableReadLoop","detail":"{readStateIndex:1208; appliedIndex:1207; }","duration":"286.455789ms","start":"2025-05-10T17:41:19.871012Z","end":"2025-05-10T17:41:20.157468Z","steps":["trace[312139809] 'read index received'  (duration: 142.74821ms)","trace[312139809] 'applied index is now lower than readState.Index'  (duration: 143.706455ms)"],"step_count":2}
	{"level":"warn","ts":"2025-05-10T17:41:20.157610Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.603975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:41:20.157642Z","caller":"traceutil/trace.go:171","msg":"trace[1166239947] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1177; }","duration":"286.661216ms","start":"2025-05-10T17:41:19.870973Z","end":"2025-05-10T17:41:20.157634Z","steps":["trace[1166239947] 'agreement among raft nodes before linearized reading'  (duration: 286.603848ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:23.813100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.505354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:41:23.813143Z","caller":"traceutil/trace.go:171","msg":"trace[1816674603] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"134.577306ms","start":"2025-05-10T17:41:23.678555Z","end":"2025-05-10T17:41:23.813132Z","steps":["trace[1816674603] 'range keys from in-memory index tree'  (duration: 134.457412ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:32.441803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.46138ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:41:32.442789Z","caller":"traceutil/trace.go:171","msg":"trace[52974085] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"262.516178ms","start":"2025-05-10T17:41:32.180255Z","end":"2025-05-10T17:41:32.442771Z","steps":["trace[52974085] 'range keys from in-memory index tree'  (duration: 261.391539ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:32.442613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.488856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-05-10T17:41:32.442648Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.249026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-05-10T17:41:32.446649Z","caller":"traceutil/trace.go:171","msg":"trace[2101719547] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1250; }","duration":"228.258228ms","start":"2025-05-10T17:41:32.218374Z","end":"2025-05-10T17:41:32.446632Z","steps":["trace[2101719547] 'count revisions from in-memory index tree'  (duration: 224.207737ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:41:32.446121Z","caller":"traceutil/trace.go:171","msg":"trace[1053638242] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1250; }","duration":"191.014934ms","start":"2025-05-10T17:41:32.255095Z","end":"2025-05-10T17:41:32.446110Z","steps":["trace[1053638242] 'range keys from in-memory index tree'  (duration: 187.308252ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:41:48.358385Z","caller":"traceutil/trace.go:171","msg":"trace[205082752] transaction","detail":"{read_only:false; response_revision:1345; number_of_response:1; }","duration":"279.543397ms","start":"2025-05-10T17:41:48.078826Z","end":"2025-05-10T17:41:48.358369Z","steps":["trace[205082752] 'process raft request'  (duration: 279.443575ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:43:29.998049Z","caller":"traceutil/trace.go:171","msg":"trace[2131074455] transaction","detail":"{read_only:false; response_revision:1613; number_of_response:1; }","duration":"273.363582ms","start":"2025-05-10T17:43:29.724655Z","end":"2025-05-10T17:43:29.998018Z","steps":["trace[2131074455] 'process raft request'  (duration: 272.877413ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:43:38.622297Z","caller":"traceutil/trace.go:171","msg":"trace[1907339207] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1655; }","duration":"226.096167ms","start":"2025-05-10T17:43:38.396185Z","end":"2025-05-10T17:43:38.622281Z","steps":["trace[1907339207] 'process raft request'  (duration: 225.94906ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:43:38.632702Z","caller":"traceutil/trace.go:171","msg":"trace[1703725001] transaction","detail":"{read_only:false; response_revision:1656; number_of_response:1; }","duration":"232.960343ms","start":"2025-05-10T17:43:38.399726Z","end":"2025-05-10T17:43:38.632687Z","steps":["trace[1703725001] 'process raft request'  (duration: 231.799352ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:50:10.900531Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2421}
	{"level":"info","ts":"2025-05-10T17:50:11.047585Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":2421,"took":"145.247113ms","hash":2070793704,"current-db-size-bytes":11632640,"current-db-size":"12 MB","current-db-size-in-use-bytes":3444736,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2025-05-10T17:50:11.047662Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2070793704,"revision":2421,"compact-revision":-1}
	
	
	==> kernel <==
	 17:52:39 up 13 min,  0 user,  load average: 0.07, 0.33, 0.36
	Linux addons-661496 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [f0e94709db4919a868c3abd359869dfc8ae3023971570e7c3cbef8615372a1c1] <==
	I0510 17:44:24.399305       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:44:30.189628       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:44:32.360130       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:44:35.746711       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0510 17:44:36.779930       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0510 17:44:37.987030       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0510 17:44:38.180631       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.187.250"}
	I0510 17:44:38.187206       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:44:46.279208       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0510 17:45:03.407445       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.407816       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:45:03.429150       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.429610       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:45:03.449742       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.450344       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:45:03.476778       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.477089       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:45:03.542041       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.542322       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0510 17:45:04.429327       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0510 17:45:04.544379       1 cacher.go:183] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0510 17:45:04.567189       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W0510 17:45:04.627618       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0510 17:45:09.410104       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0510 17:50:12.928374       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ffb6f242cbf4cc00584479d6bca7b87ecafb6910608502978ef070cc8c3ac695] <==
	E0510 17:50:54.573327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:05.228185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:05.988723       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:06.730973       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:16.431868       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:23.808590       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:29.621059       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:31.792256       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:34.019526       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:35.573741       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:37.821366       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:42.601606       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:46.176212       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:46.939933       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:52.578167       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:51:57.102215       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:52:08.775539       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:52:12.204824       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:52:16.630456       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:52:20.491025       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:52:21.584222       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:52:26.495116       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:52:28.545346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:52:28.987858       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:52:36.993954       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0f62645c8df43b1446b6c83a4d18c64e9372efd243bd6f194bd8dfb229d9c803] <==
	E0510 17:40:21.832482       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 17:40:21.877419       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0510 17:40:21.877504       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:40:22.072548       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 17:40:22.072597       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 17:40:22.072629       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:40:22.122387       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:40:22.122740       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:40:22.122766       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:40:22.178604       1 config.go:199] "Starting service config controller"
	I0510 17:40:22.178637       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:40:22.178691       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:40:22.178707       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:40:22.178720       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:40:22.178734       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:40:22.179721       1 config.go:329] "Starting node config controller"
	I0510 17:40:22.179749       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:40:22.279288       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:40:22.279328       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:40:22.279353       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:40:22.279922       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [5d41575d5f369fd9c9f0ce72b6cb8fa6a09f530a21e332c6cfa6dac44e671159] <==
	E0510 17:40:12.961204       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:40:12.961454       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 17:40:12.962491       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:40:12.963658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:40:12.966283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 17:40:12.966322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:40:12.966336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 17:40:12.966662       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:40:12.968435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 17:40:12.968472       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:40:12.968683       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 17:40:12.969321       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 17:40:12.969638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:40:13.818536       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:40:13.880458       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 17:40:13.883032       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 17:40:13.890012       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:40:13.899207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:40:14.002440       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:40:14.097120       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 17:40:14.150064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:40:14.248162       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 17:40:14.296088       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:40:14.334006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0510 17:40:16.047273       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 17:50:27 addons-661496 kubelet[1566]: E0510 17:50:27.190636    1566 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	May 10 17:50:27 addons-661496 kubelet[1566]: E0510 17:50:27.191221    1566 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j5ztn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(fa098ebf-237d-4738-96c9-0bbde71445c1): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	May 10 17:50:27 addons-661496 kubelet[1566]: E0510 17:50:27.192436    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:50:32 addons-661496 kubelet[1566]: E0510 17:50:32.816008    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:50:40 addons-661496 kubelet[1566]: E0510 17:50:40.816068    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:50:47 addons-661496 kubelet[1566]: I0510 17:50:47.816645    1566 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-v4gbz" secret="" err="secret \"gcp-auth\" not found"
	May 10 17:50:47 addons-661496 kubelet[1566]: E0510 17:50:47.817147    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:50:55 addons-661496 kubelet[1566]: E0510 17:50:55.817005    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:51:01 addons-661496 kubelet[1566]: E0510 17:51:01.816777    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:51:09 addons-661496 kubelet[1566]: E0510 17:51:09.817682    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:51:13 addons-661496 kubelet[1566]: E0510 17:51:13.816234    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:51:23 addons-661496 kubelet[1566]: E0510 17:51:23.817108    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:51:24 addons-661496 kubelet[1566]: E0510 17:51:24.816617    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:51:32 addons-661496 kubelet[1566]: I0510 17:51:32.815662    1566 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	May 10 17:51:36 addons-661496 kubelet[1566]: E0510 17:51:36.816500    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:51:37 addons-661496 kubelet[1566]: E0510 17:51:37.815761    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:51:50 addons-661496 kubelet[1566]: E0510 17:51:50.817290    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:51:51 addons-661496 kubelet[1566]: I0510 17:51:51.817817    1566 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-v4gbz" secret="" err="secret \"gcp-auth\" not found"
	May 10 17:51:52 addons-661496 kubelet[1566]: E0510 17:51:52.816565    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:52:03 addons-661496 kubelet[1566]: E0510 17:52:03.816687    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:52:05 addons-661496 kubelet[1566]: E0510 17:52:05.816819    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:52:14 addons-661496 kubelet[1566]: E0510 17:52:14.816764    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:52:20 addons-661496 kubelet[1566]: E0510 17:52:20.816869    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:52:26 addons-661496 kubelet[1566]: E0510 17:52:26.816817    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:52:32 addons-661496 kubelet[1566]: E0510 17:52:32.816016    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	
	
	==> storage-provisioner [5aa32f181c6fbb06b9466366d0447e0cb0a52b4c7da9597a4a94534d73372697] <==
	W0510 17:52:14.911662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:16.915678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:16.920737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:18.924567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:18.932321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:20.935426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:20.940788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:22.944404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:22.952116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:24.955652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:24.960647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:26.964231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:26.969377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:28.974709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:28.979846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:30.983779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:30.991412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:32.995544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:33.000604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:35.004006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:35.011750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:37.015749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:37.021117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:39.024779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:52:39.033238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-661496 -n addons-661496
helpers_test.go:261: (dbg) Run:  kubectl --context addons-661496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx test-local-path ingress-nginx-admission-create-fhvxz ingress-nginx-admission-patch-9d5dm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-661496 describe pod nginx test-local-path ingress-nginx-admission-create-fhvxz ingress-nginx-admission-patch-9d5dm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-661496 describe pod nginx test-local-path ingress-nginx-admission-create-fhvxz ingress-nginx-admission-patch-9d5dm: exit status 1 (75.914241ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-661496/192.168.39.168
	Start Time:       Sat, 10 May 2025 17:44:38 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.35
	IPs:
	  IP:  10.244.0.35
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5ztn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j5ztn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  8m2s                    default-scheduler  Successfully assigned default/nginx to addons-661496
	  Normal   Pulling    5m4s (x5 over 8m2s)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m1s (x5 over 8m)       kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m1s (x5 over 8m)       kubelet            Error: ErrImagePull
	  Warning  Failed     2m53s (x20 over 7m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m42s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-661496/192.168.39.168
	Start Time:       Sat, 10 May 2025 17:44:13 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jq4hn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jq4hn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  8m27s                   default-scheduler  Successfully assigned default/test-local-path to addons-661496
	  Warning  Failed     6m51s (x2 over 8m23s)   kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m17s (x5 over 8m26s)   kubelet            Pulling image "busybox:stable"
	  Warning  Failed     5m14s (x5 over 8m23s)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m14s (x3 over 8m6s)    kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:e246aa22ad2cbdfbd19e2a6ca2b275e26245a21920e2b2d0666324cee3f15549: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m10s (x19 over 8m22s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m45s (x21 over 8m22s)  kubelet            Back-off pulling image "busybox:stable"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fhvxz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9d5dm" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-661496 describe pod nginx test-local-path ingress-nginx-admission-create-fhvxz ingress-nginx-admission-patch-9d5dm: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 addons disable ingress-dns --alsologtostderr -v=1: (1.183154786s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 addons disable ingress --alsologtostderr -v=1: (7.700496178s)
--- FAIL: TestAddons/parallel/Ingress (491.81s)

TestAddons/parallel/LocalPath (231.51s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-661496 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-661496 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8cfaa910-fd77-46b3-81a2-e85c5ca6e000] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:901: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:901: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-661496 -n addons-661496
addons_test.go:901: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-05-10 17:47:14.191605033 +0000 UTC m=+501.835375078
addons_test.go:901: (dbg) Run:  kubectl --context addons-661496 describe po test-local-path -n default
addons_test.go:901: (dbg) kubectl --context addons-661496 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-661496/192.168.39.168
Start Time:       Sat, 10 May 2025 17:44:13 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.31
IPs:
IP:  10.244.0.31
Containers:
busybox:
Container ID:  
Image:         busybox:stable
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jq4hn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
data:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  test-pvc
ReadOnly:   false
kube-api-access-jq4hn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  3m1s                   default-scheduler  Successfully assigned default/test-local-path to addons-661496
Warning  Failed     2m12s (x2 over 2m40s)  kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:e246aa22ad2cbdfbd19e2a6ca2b275e26245a21920e2b2d0666324cee3f15549: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    88s (x4 over 3m)       kubelet            Pulling image "busybox:stable"
Warning  Failed     85s (x2 over 2m57s)    kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     85s (x4 over 2m57s)    kubelet            Error: ErrImagePull
Normal   BackOff    6s (x10 over 2m56s)    kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     6s (x10 over 2m56s)    kubelet            Error: ImagePullBackOff
addons_test.go:901: (dbg) Run:  kubectl --context addons-661496 logs test-local-path -n default
addons_test.go:901: (dbg) Non-zero exit: kubectl --context addons-661496 logs test-local-path -n default: exit status 1 (74.6828ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:901: kubectl --context addons-661496 logs test-local-path -n default: exit status 1
addons_test.go:902: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
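The LocalPath failure shares the same root cause: the busybox:stable pull is throttled with 429 until the 3m0s pod timeout expires. An alternative sketch, assuming a Docker Hub account is available, is to attach an imagePullSecret to the default service account so the kubelet pulls with credentials (the secret name "regcred" and the placeholder credentials are illustrative, not taken from this run):

	# hypothetical mitigation, not executed by this job
	kubectl --context addons-661496 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-661496 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'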
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-661496 -n addons-661496
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 logs -n 25: (1.232232689s)
helpers_test.go:252: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube             | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| delete  | -p download-only-685238              | download-only-685238 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| start   | -o=json --download-only              | download-only-932669 | jenkins | v1.35.0 | 10 May 25 17:39 UTC |                     |
	|         | -p download-only-932669              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| delete  | -p download-only-932669              | download-only-932669 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| delete  | -p download-only-685238              | download-only-685238 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| delete  | -p download-only-932669              | download-only-932669 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| start   | --download-only -p                   | binary-mirror-772258 | jenkins | v1.35.0 | 10 May 25 17:39 UTC |                     |
	|         | binary-mirror-772258                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39889               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-772258              | binary-mirror-772258 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| addons  | enable dashboard -p                  | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:39 UTC |                     |
	|         | addons-661496                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:39 UTC |                     |
	|         | addons-661496                        |                      |         |         |                     |                     |
	| start   | -p addons-661496 --wait=true         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:43 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:43 UTC | 10 May 25 17:43 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:43 UTC | 10 May 25 17:44 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | -p addons-661496                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-661496 ip                     | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-661496 addons disable         | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:44 UTC | 10 May 25 17:44 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:45 UTC | 10 May 25 17:45 UTC |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-661496 addons                 | addons-661496        | jenkins | v1.35.0 | 10 May 25 17:45 UTC | 10 May 25 17:45 UTC |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:39:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:39:30.720506 1172998 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:39:30.720759 1172998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:39:30.720769 1172998 out.go:358] Setting ErrFile to fd 2...
	I0510 17:39:30.720773 1172998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:39:30.720983 1172998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 17:39:30.721652 1172998 out.go:352] Setting JSON to false
	I0510 17:39:30.722607 1172998 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":19315,"bootTime":1746879456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:39:30.722729 1172998 start.go:140] virtualization: kvm guest
	I0510 17:39:30.724714 1172998 out.go:177] * [addons-661496] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:39:30.726285 1172998 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:39:30.726302 1172998 notify.go:220] Checking for updates...
	I0510 17:39:30.728697 1172998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:39:30.729927 1172998 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:39:30.731180 1172998 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:39:30.732364 1172998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:39:30.733647 1172998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:39:30.735138 1172998 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:39:30.766808 1172998 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 17:39:30.768483 1172998 start.go:304] selected driver: kvm2
	I0510 17:39:30.768498 1172998 start.go:908] validating driver "kvm2" against <nil>
	I0510 17:39:30.768511 1172998 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:39:30.769232 1172998 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:39:30.769318 1172998 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-1165049/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 17:39:30.784854 1172998 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 17:39:30.784902 1172998 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 17:39:30.785176 1172998 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 17:39:30.785208 1172998 cni.go:84] Creating CNI manager for ""
	I0510 17:39:30.785259 1172998 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:39:30.785268 1172998 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 17:39:30.785322 1172998 start.go:347] cluster config:
	{Name:addons-661496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:39:30.785417 1172998 iso.go:125] acquiring lock: {Name:mkc65d6718a5a236dac4e9cf2d61c7062c63896e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:39:30.787251 1172998 out.go:177] * Starting "addons-661496" primary control-plane node in "addons-661496" cluster
	I0510 17:39:30.788371 1172998 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime containerd
	I0510 17:39:30.788416 1172998 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4
	I0510 17:39:30.788430 1172998 cache.go:56] Caching tarball of preloaded images
	I0510 17:39:30.788562 1172998 preload.go:172] Found /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0510 17:39:30.788579 1172998 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on containerd
	I0510 17:39:30.788888 1172998 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/config.json ...
	I0510 17:39:30.788915 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/config.json: {Name:mkfaa167b5e6079cbdf7c27a2f4d987819f61e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:39:30.789104 1172998 start.go:360] acquireMachinesLock for addons-661496: {Name:mk94a427f3fc363027a2f9c3c99b3847312d5b6e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0510 17:39:30.789166 1172998 start.go:364] duration metric: took 44.744µs to acquireMachinesLock for "addons-661496"
	I0510 17:39:30.789206 1172998 start.go:93] Provisioning new machine with config: &{Name:addons-661496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0510 17:39:30.789259 1172998 start.go:125] createHost starting for "" (driver="kvm2")
	I0510 17:39:30.790777 1172998 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0510 17:39:30.790969 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:39:30.791011 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:39:30.805778 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0510 17:39:30.806313 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:39:30.806909 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:39:30.806933 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:39:30.807377 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:39:30.807583 1172998 main.go:141] libmachine: (addons-661496) Calling .GetMachineName
	I0510 17:39:30.807759 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:30.807902 1172998 start.go:159] libmachine.API.Create for "addons-661496" (driver="kvm2")
	I0510 17:39:30.807938 1172998 client.go:168] LocalClient.Create starting
	I0510 17:39:30.807977 1172998 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem
	I0510 17:39:30.863328 1172998 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/cert.pem
	I0510 17:39:31.017621 1172998 main.go:141] libmachine: Running pre-create checks...
	I0510 17:39:31.017649 1172998 main.go:141] libmachine: (addons-661496) Calling .PreCreateCheck
	I0510 17:39:31.018175 1172998 main.go:141] libmachine: (addons-661496) Calling .GetConfigRaw
	I0510 17:39:31.018694 1172998 main.go:141] libmachine: Creating machine...
	I0510 17:39:31.018711 1172998 main.go:141] libmachine: (addons-661496) Calling .Create
	I0510 17:39:31.018936 1172998 main.go:141] libmachine: (addons-661496) creating KVM machine...
	I0510 17:39:31.018948 1172998 main.go:141] libmachine: (addons-661496) creating network...
	I0510 17:39:31.020305 1172998 main.go:141] libmachine: (addons-661496) DBG | found existing default KVM network
	I0510 17:39:31.020987 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.020850 1173020 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000208dd0}
	I0510 17:39:31.021031 1172998 main.go:141] libmachine: (addons-661496) DBG | created network xml: 
	I0510 17:39:31.021047 1172998 main.go:141] libmachine: (addons-661496) DBG | <network>
	I0510 17:39:31.021056 1172998 main.go:141] libmachine: (addons-661496) DBG |   <name>mk-addons-661496</name>
	I0510 17:39:31.021065 1172998 main.go:141] libmachine: (addons-661496) DBG |   <dns enable='no'/>
	I0510 17:39:31.021072 1172998 main.go:141] libmachine: (addons-661496) DBG |   
	I0510 17:39:31.021081 1172998 main.go:141] libmachine: (addons-661496) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0510 17:39:31.021093 1172998 main.go:141] libmachine: (addons-661496) DBG |     <dhcp>
	I0510 17:39:31.021102 1172998 main.go:141] libmachine: (addons-661496) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0510 17:39:31.021114 1172998 main.go:141] libmachine: (addons-661496) DBG |     </dhcp>
	I0510 17:39:31.021136 1172998 main.go:141] libmachine: (addons-661496) DBG |   </ip>
	I0510 17:39:31.021159 1172998 main.go:141] libmachine: (addons-661496) DBG |   
	I0510 17:39:31.021175 1172998 main.go:141] libmachine: (addons-661496) DBG | </network>
	I0510 17:39:31.021193 1172998 main.go:141] libmachine: (addons-661496) DBG | 
	I0510 17:39:31.026704 1172998 main.go:141] libmachine: (addons-661496) DBG | trying to create private KVM network mk-addons-661496 192.168.39.0/24...
	I0510 17:39:31.093110 1172998 main.go:141] libmachine: (addons-661496) DBG | private KVM network mk-addons-661496 192.168.39.0/24 created
	I0510 17:39:31.093148 1172998 main.go:141] libmachine: (addons-661496) setting up store path in /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496 ...
	I0510 17:39:31.093164 1172998 main.go:141] libmachine: (addons-661496) building disk image from file:///home/jenkins/minikube-integration/20720-1165049/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 17:39:31.093206 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.093077 1173020 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:39:31.093306 1172998 main.go:141] libmachine: (addons-661496) Downloading /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20720-1165049/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso...
	I0510 17:39:31.406155 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.406017 1173020 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa...
	I0510 17:39:31.568126 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.567921 1173020 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/addons-661496.rawdisk...
	I0510 17:39:31.568190 1172998 main.go:141] libmachine: (addons-661496) DBG | Writing magic tar header
	I0510 17:39:31.568205 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496 (perms=drwx------)
	I0510 17:39:31.568221 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration/20720-1165049/.minikube/machines (perms=drwxr-xr-x)
	I0510 17:39:31.568232 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration/20720-1165049/.minikube (perms=drwxr-xr-x)
	I0510 17:39:31.568239 1172998 main.go:141] libmachine: (addons-661496) DBG | Writing SSH key tar header
	I0510 17:39:31.568272 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration/20720-1165049 (perms=drwxrwxr-x)
	I0510 17:39:31.568320 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:31.568041 1173020 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496 ...
	I0510 17:39:31.568331 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0510 17:39:31.568341 1172998 main.go:141] libmachine: (addons-661496) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0510 17:39:31.568346 1172998 main.go:141] libmachine: (addons-661496) creating domain...
	I0510 17:39:31.568356 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496
	I0510 17:39:31.568365 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines
	I0510 17:39:31.568378 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:39:31.568392 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20720-1165049
	I0510 17:39:31.568403 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0510 17:39:31.568412 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home/jenkins
	I0510 17:39:31.568421 1172998 main.go:141] libmachine: (addons-661496) DBG | checking permissions on dir: /home
	I0510 17:39:31.568428 1172998 main.go:141] libmachine: (addons-661496) DBG | skipping /home - not owner
	I0510 17:39:31.569387 1172998 main.go:141] libmachine: (addons-661496) define libvirt domain using xml: 
	I0510 17:39:31.569404 1172998 main.go:141] libmachine: (addons-661496) <domain type='kvm'>
	I0510 17:39:31.569411 1172998 main.go:141] libmachine: (addons-661496)   <name>addons-661496</name>
	I0510 17:39:31.569416 1172998 main.go:141] libmachine: (addons-661496)   <memory unit='MiB'>4000</memory>
	I0510 17:39:31.569421 1172998 main.go:141] libmachine: (addons-661496)   <vcpu>2</vcpu>
	I0510 17:39:31.569425 1172998 main.go:141] libmachine: (addons-661496)   <features>
	I0510 17:39:31.569432 1172998 main.go:141] libmachine: (addons-661496)     <acpi/>
	I0510 17:39:31.569439 1172998 main.go:141] libmachine: (addons-661496)     <apic/>
	I0510 17:39:31.569446 1172998 main.go:141] libmachine: (addons-661496)     <pae/>
	I0510 17:39:31.569473 1172998 main.go:141] libmachine: (addons-661496)     
	I0510 17:39:31.569504 1172998 main.go:141] libmachine: (addons-661496)   </features>
	I0510 17:39:31.569531 1172998 main.go:141] libmachine: (addons-661496)   <cpu mode='host-passthrough'>
	I0510 17:39:31.569560 1172998 main.go:141] libmachine: (addons-661496)   
	I0510 17:39:31.569569 1172998 main.go:141] libmachine: (addons-661496)   </cpu>
	I0510 17:39:31.569574 1172998 main.go:141] libmachine: (addons-661496)   <os>
	I0510 17:39:31.569580 1172998 main.go:141] libmachine: (addons-661496)     <type>hvm</type>
	I0510 17:39:31.569586 1172998 main.go:141] libmachine: (addons-661496)     <boot dev='cdrom'/>
	I0510 17:39:31.569593 1172998 main.go:141] libmachine: (addons-661496)     <boot dev='hd'/>
	I0510 17:39:31.569601 1172998 main.go:141] libmachine: (addons-661496)     <bootmenu enable='no'/>
	I0510 17:39:31.569611 1172998 main.go:141] libmachine: (addons-661496)   </os>
	I0510 17:39:31.569631 1172998 main.go:141] libmachine: (addons-661496)   <devices>
	I0510 17:39:31.569650 1172998 main.go:141] libmachine: (addons-661496)     <disk type='file' device='cdrom'>
	I0510 17:39:31.569662 1172998 main.go:141] libmachine: (addons-661496)       <source file='/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/boot2docker.iso'/>
	I0510 17:39:31.569672 1172998 main.go:141] libmachine: (addons-661496)       <target dev='hdc' bus='scsi'/>
	I0510 17:39:31.569680 1172998 main.go:141] libmachine: (addons-661496)       <readonly/>
	I0510 17:39:31.569687 1172998 main.go:141] libmachine: (addons-661496)     </disk>
	I0510 17:39:31.569695 1172998 main.go:141] libmachine: (addons-661496)     <disk type='file' device='disk'>
	I0510 17:39:31.569706 1172998 main.go:141] libmachine: (addons-661496)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0510 17:39:31.569729 1172998 main.go:141] libmachine: (addons-661496)       <source file='/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/addons-661496.rawdisk'/>
	I0510 17:39:31.569750 1172998 main.go:141] libmachine: (addons-661496)       <target dev='hda' bus='virtio'/>
	I0510 17:39:31.569762 1172998 main.go:141] libmachine: (addons-661496)     </disk>
	I0510 17:39:31.569773 1172998 main.go:141] libmachine: (addons-661496)     <interface type='network'>
	I0510 17:39:31.569786 1172998 main.go:141] libmachine: (addons-661496)       <source network='mk-addons-661496'/>
	I0510 17:39:31.569795 1172998 main.go:141] libmachine: (addons-661496)       <model type='virtio'/>
	I0510 17:39:31.569807 1172998 main.go:141] libmachine: (addons-661496)     </interface>
	I0510 17:39:31.569815 1172998 main.go:141] libmachine: (addons-661496)     <interface type='network'>
	I0510 17:39:31.569825 1172998 main.go:141] libmachine: (addons-661496)       <source network='default'/>
	I0510 17:39:31.569836 1172998 main.go:141] libmachine: (addons-661496)       <model type='virtio'/>
	I0510 17:39:31.569871 1172998 main.go:141] libmachine: (addons-661496)     </interface>
	I0510 17:39:31.569898 1172998 main.go:141] libmachine: (addons-661496)     <serial type='pty'>
	I0510 17:39:31.569908 1172998 main.go:141] libmachine: (addons-661496)       <target port='0'/>
	I0510 17:39:31.569915 1172998 main.go:141] libmachine: (addons-661496)     </serial>
	I0510 17:39:31.569923 1172998 main.go:141] libmachine: (addons-661496)     <console type='pty'>
	I0510 17:39:31.569932 1172998 main.go:141] libmachine: (addons-661496)       <target type='serial' port='0'/>
	I0510 17:39:31.569940 1172998 main.go:141] libmachine: (addons-661496)     </console>
	I0510 17:39:31.569961 1172998 main.go:141] libmachine: (addons-661496)     <rng model='virtio'>
	I0510 17:39:31.569976 1172998 main.go:141] libmachine: (addons-661496)       <backend model='random'>/dev/random</backend>
	I0510 17:39:31.569989 1172998 main.go:141] libmachine: (addons-661496)     </rng>
	I0510 17:39:31.570001 1172998 main.go:141] libmachine: (addons-661496)     
	I0510 17:39:31.570010 1172998 main.go:141] libmachine: (addons-661496)     
	I0510 17:39:31.570018 1172998 main.go:141] libmachine: (addons-661496)   </devices>
	I0510 17:39:31.570027 1172998 main.go:141] libmachine: (addons-661496) </domain>
	I0510 17:39:31.570039 1172998 main.go:141] libmachine: (addons-661496) 
	I0510 17:39:31.575914 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:4e:79:e3 in network default
	I0510 17:39:31.576613 1172998 main.go:141] libmachine: (addons-661496) starting domain...
	I0510 17:39:31.576637 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:31.576642 1172998 main.go:141] libmachine: (addons-661496) ensuring networks are active...
	I0510 17:39:31.577385 1172998 main.go:141] libmachine: (addons-661496) Ensuring network default is active
	I0510 17:39:31.577737 1172998 main.go:141] libmachine: (addons-661496) Ensuring network mk-addons-661496 is active
	I0510 17:39:31.578199 1172998 main.go:141] libmachine: (addons-661496) getting domain XML...
	I0510 17:39:31.578836 1172998 main.go:141] libmachine: (addons-661496) creating domain...
	I0510 17:39:32.982410 1172998 main.go:141] libmachine: (addons-661496) waiting for IP...
	I0510 17:39:32.983172 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:32.983564 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:32.983619 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:32.983572 1173020 retry.go:31] will retry after 216.769661ms: waiting for domain to come up
	I0510 17:39:33.202181 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:33.202673 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:33.202732 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:33.202651 1173020 retry.go:31] will retry after 340.808751ms: waiting for domain to come up
	I0510 17:39:33.545470 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:33.545971 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:33.546011 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:33.545960 1173020 retry.go:31] will retry after 483.379709ms: waiting for domain to come up
	I0510 17:39:34.030801 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:34.031259 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:34.031287 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:34.031231 1173020 retry.go:31] will retry after 552.15185ms: waiting for domain to come up
	I0510 17:39:34.585072 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:34.585659 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:34.585693 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:34.585606 1173020 retry.go:31] will retry after 664.178924ms: waiting for domain to come up
	I0510 17:39:35.251679 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:35.252266 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:35.252296 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:35.252245 1173020 retry.go:31] will retry after 776.32739ms: waiting for domain to come up
	I0510 17:39:36.029991 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:36.030564 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:36.030590 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:36.030494 1173020 retry.go:31] will retry after 1.081819112s: waiting for domain to come up
	I0510 17:39:37.113967 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:37.114443 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:37.114506 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:37.114406 1173020 retry.go:31] will retry after 1.462566483s: waiting for domain to come up
	I0510 17:39:38.579064 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:38.579515 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:38.579595 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:38.579490 1173020 retry.go:31] will retry after 1.342534125s: waiting for domain to come up
	I0510 17:39:39.924363 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:39.924862 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:39.924893 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:39.924817 1173020 retry.go:31] will retry after 1.720624711s: waiting for domain to come up
	I0510 17:39:41.647711 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:41.648298 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:41.648381 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:41.648284 1173020 retry.go:31] will retry after 2.214923221s: waiting for domain to come up
	I0510 17:39:43.865667 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:43.866173 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:43.866202 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:43.866128 1173020 retry.go:31] will retry after 2.343225628s: waiting for domain to come up
	I0510 17:39:46.211369 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:46.211840 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:46.211874 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:46.211778 1173020 retry.go:31] will retry after 3.192384897s: waiting for domain to come up
	I0510 17:39:49.408277 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:49.408735 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find current IP address of domain addons-661496 in network mk-addons-661496
	I0510 17:39:49.408762 1172998 main.go:141] libmachine: (addons-661496) DBG | I0510 17:39:49.408702 1173020 retry.go:31] will retry after 4.135723361s: waiting for domain to come up
	I0510 17:39:53.547776 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.548260 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has current primary IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.548282 1172998 main.go:141] libmachine: (addons-661496) found domain IP: 192.168.39.168
	I0510 17:39:53.548296 1172998 main.go:141] libmachine: (addons-661496) reserving static IP address...
	I0510 17:39:53.548665 1172998 main.go:141] libmachine: (addons-661496) DBG | unable to find host DHCP lease matching {name: "addons-661496", mac: "52:54:00:9e:78:fe", ip: "192.168.39.168"} in network mk-addons-661496
	I0510 17:39:53.621940 1172998 main.go:141] libmachine: (addons-661496) DBG | Getting to WaitForSSH function...
	I0510 17:39:53.621976 1172998 main.go:141] libmachine: (addons-661496) reserved static IP address 192.168.39.168 for domain addons-661496
	I0510 17:39:53.621989 1172998 main.go:141] libmachine: (addons-661496) waiting for SSH...
	I0510 17:39:53.624195 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.624576 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:53.624602 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.624739 1172998 main.go:141] libmachine: (addons-661496) DBG | Using SSH client type: external
	I0510 17:39:53.624785 1172998 main.go:141] libmachine: (addons-661496) DBG | Using SSH private key: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa (-rw-------)
	I0510 17:39:53.624824 1172998 main.go:141] libmachine: (addons-661496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0510 17:39:53.624847 1172998 main.go:141] libmachine: (addons-661496) DBG | About to run SSH command:
	I0510 17:39:53.624878 1172998 main.go:141] libmachine: (addons-661496) DBG | exit 0
	I0510 17:39:53.752178 1172998 main.go:141] libmachine: (addons-661496) DBG | SSH cmd err, output: <nil>: 
	I0510 17:39:53.752500 1172998 main.go:141] libmachine: (addons-661496) KVM machine creation complete
	I0510 17:39:53.752843 1172998 main.go:141] libmachine: (addons-661496) Calling .GetConfigRaw
	I0510 17:39:53.753423 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:53.753641 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:53.753768 1172998 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0510 17:39:53.753781 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:39:53.754940 1172998 main.go:141] libmachine: Detecting operating system of created instance...
	I0510 17:39:53.754954 1172998 main.go:141] libmachine: Waiting for SSH to be available...
	I0510 17:39:53.754960 1172998 main.go:141] libmachine: Getting to WaitForSSH function...
	I0510 17:39:53.754985 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:53.757207 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.757549 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:53.757576 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.757656 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:53.757806 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.757949 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.758079 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:53.758233 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:53.758480 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:53.758493 1172998 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0510 17:39:53.867779 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:39:53.867810 1172998 main.go:141] libmachine: Detecting the provisioner...
	I0510 17:39:53.867822 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:53.870400 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.870814 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:53.870847 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.870977 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:53.871158 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.871337 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.871480 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:53.871639 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:53.871843 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:53.871855 1172998 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0510 17:39:53.981092 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2024.11.2-dirty
	ID=buildroot
	VERSION_ID=2024.11.2
	PRETTY_NAME="Buildroot 2024.11.2"
	
	I0510 17:39:53.981212 1172998 main.go:141] libmachine: found compatible host: buildroot
	I0510 17:39:53.981227 1172998 main.go:141] libmachine: Provisioning with buildroot...
	I0510 17:39:53.981236 1172998 main.go:141] libmachine: (addons-661496) Calling .GetMachineName
	I0510 17:39:53.981553 1172998 buildroot.go:166] provisioning hostname "addons-661496"
	I0510 17:39:53.981595 1172998 main.go:141] libmachine: (addons-661496) Calling .GetMachineName
	I0510 17:39:53.981769 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:53.984647 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.984964 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:53.984993 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:53.985238 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:53.985431 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.985567 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:53.985685 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:53.985817 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:53.986022 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:53.986034 1172998 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-661496 && echo "addons-661496" | sudo tee /etc/hostname
	I0510 17:39:54.113974 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-661496
	
	I0510 17:39:54.114006 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.116594 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.116890 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.116935 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.117091 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.117307 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.117496 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.117623 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.117769 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:54.118026 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:54.118043 1172998 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-661496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-661496/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-661496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 17:39:54.234057 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
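The shell fragment just run is an idempotent /etc/hosts fix: it only adds or rewrites the 127.0.1.1 entry when the hostname is not already resolvable. A small Go sketch that renders the same guard for a given hostname (helper name is illustrative):

package main

import "fmt"

// hostsFixCmd builds the shell command shown above: rewrite an existing
// 127.0.1.1 line, or append one, but only when /etc/hosts does not
// already mention the hostname.
func hostsFixCmd(host string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, host)
}

func main() { fmt.Println(hostsFixCmd("addons-661496")) }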
	I0510 17:39:54.234109 1172998 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20720-1165049/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-1165049/.minikube}
	I0510 17:39:54.234140 1172998 buildroot.go:174] setting up certificates
	I0510 17:39:54.234154 1172998 provision.go:84] configureAuth start
	I0510 17:39:54.234169 1172998 main.go:141] libmachine: (addons-661496) Calling .GetMachineName
	I0510 17:39:54.234485 1172998 main.go:141] libmachine: (addons-661496) Calling .GetIP
	I0510 17:39:54.237262 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.237595 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.237620 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.237780 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.240029 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.240418 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.240445 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.240653 1172998 provision.go:143] copyHostCerts
	I0510 17:39:54.240737 1172998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.pem (1078 bytes)
	I0510 17:39:54.240908 1172998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-1165049/.minikube/cert.pem (1123 bytes)
	I0510 17:39:54.240998 1172998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-1165049/.minikube/key.pem (1679 bytes)
	I0510 17:39:54.241057 1172998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca-key.pem org=jenkins.addons-661496 san=[127.0.0.1 192.168.39.168 addons-661496 localhost minikube]
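The san=[...] list above becomes the DNSNames and IPAddresses of the server certificate. A hedged sketch of building such a certificate with crypto/x509, self-signed here for brevity (the real cert is signed with the minikube CA key, and the org string is copied from the log only for illustration):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs taken from the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-661496"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-661496", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.168")},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}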
	I0510 17:39:54.335054 1172998 provision.go:177] copyRemoteCerts
	I0510 17:39:54.335129 1172998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 17:39:54.335159 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.337915 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.338284 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.338317 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.338472 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.338699 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.338886 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.339024 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:39:54.423690 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0510 17:39:54.449850 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 17:39:54.475540 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 17:39:54.500529 1172998 provision.go:87] duration metric: took 266.357083ms to configureAuth
	I0510 17:39:54.500559 1172998 buildroot.go:189] setting minikube options for container-runtime
	I0510 17:39:54.500728 1172998 config.go:182] Loaded profile config "addons-661496": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:39:54.500751 1172998 main.go:141] libmachine: Checking connection to Docker...
	I0510 17:39:54.500760 1172998 main.go:141] libmachine: (addons-661496) Calling .GetURL
	I0510 17:39:54.502007 1172998 main.go:141] libmachine: (addons-661496) DBG | using libvirt version 6000000
	I0510 17:39:54.504136 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.504491 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.504519 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.504722 1172998 main.go:141] libmachine: Docker is up and running!
	I0510 17:39:54.504743 1172998 main.go:141] libmachine: Reticulating splines...
	I0510 17:39:54.504755 1172998 client.go:171] duration metric: took 23.696803953s to LocalClient.Create
	I0510 17:39:54.504787 1172998 start.go:167] duration metric: took 23.696884418s to libmachine.API.Create "addons-661496"
	I0510 17:39:54.504800 1172998 start.go:293] postStartSetup for "addons-661496" (driver="kvm2")
	I0510 17:39:54.504817 1172998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 17:39:54.504839 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.505171 1172998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 17:39:54.505203 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.508540 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.508964 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.508992 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.509202 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.509386 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.509543 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.509705 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:39:54.596131 1172998 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 17:39:54.600398 1172998 info.go:137] Remote host: Buildroot 2024.11.2
	I0510 17:39:54.600439 1172998 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-1165049/.minikube/addons for local assets ...
	I0510 17:39:54.600508 1172998 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-1165049/.minikube/files for local assets ...
	I0510 17:39:54.600534 1172998 start.go:296] duration metric: took 95.72299ms for postStartSetup
	I0510 17:39:54.600581 1172998 main.go:141] libmachine: (addons-661496) Calling .GetConfigRaw
	I0510 17:39:54.601211 1172998 main.go:141] libmachine: (addons-661496) Calling .GetIP
	I0510 17:39:54.604092 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.604679 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.604705 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.604960 1172998 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/config.json ...
	I0510 17:39:54.605150 1172998 start.go:128] duration metric: took 23.81587997s to createHost
	I0510 17:39:54.605191 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.607726 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.608040 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.608088 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.608244 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.608452 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.608609 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.608767 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.608912 1172998 main.go:141] libmachine: Using SSH client type: native
	I0510 17:39:54.609145 1172998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0510 17:39:54.609158 1172998 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0510 17:39:54.717125 1172998 main.go:141] libmachine: SSH cmd err, output: <nil>: 1746898794.690126450
	
	I0510 17:39:54.717156 1172998 fix.go:216] guest clock: 1746898794.690126450
	I0510 17:39:54.717164 1172998 fix.go:229] Guest: 2025-05-10 17:39:54.69012645 +0000 UTC Remote: 2025-05-10 17:39:54.605165793 +0000 UTC m=+23.921804666 (delta=84.960657ms)
	I0510 17:39:54.717186 1172998 fix.go:200] guest clock delta is within tolerance: 84.960657ms
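The fix.go lines above compare the guest's "date +%s.%N" against the host clock and accept the drift when it is within tolerance (~85ms here). A minimal sketch of that check; the 2s bound below is an assumption for illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

// withinTolerance mirrors the guest-clock check above: take the absolute
// guest/host delta and only treat the clock as skewed past a bound.
func withinTolerance(guest, host time.Time, max time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= max
}

func main() {
	guest := time.Unix(1746898794, 690126450)
	host := guest.Add(-84960657 * time.Nanosecond)           // delta from the log: ~84.96ms
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
}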
	I0510 17:39:54.717192 1172998 start.go:83] releasing machines lock for "addons-661496", held for 23.928011693s
	I0510 17:39:54.717215 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.717532 1172998 main.go:141] libmachine: (addons-661496) Calling .GetIP
	I0510 17:39:54.720203 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.720567 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.720587 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.720745 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.721284 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.721462 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:39:54.721577 1172998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 17:39:54.721624 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.721743 1172998 ssh_runner.go:195] Run: cat /version.json
	I0510 17:39:54.721769 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:39:54.724329 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.724395 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.724682 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.724711 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.724744 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:54.724761 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:54.724860 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.724972 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:39:54.725067 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.725125 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:39:54.725213 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.725287 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:39:54.725377 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:39:54.725445 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:39:54.836072 1172998 ssh_runner.go:195] Run: systemctl --version
	I0510 17:39:54.841649 1172998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0510 17:39:54.846847 1172998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0510 17:39:54.846928 1172998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:39:54.864868 1172998 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0510 17:39:54.864900 1172998 start.go:495] detecting cgroup driver to use...
	I0510 17:39:54.864981 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0510 17:39:54.896622 1172998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0510 17:39:54.910331 1172998 docker.go:225] disabling cri-docker service (if available) ...
	I0510 17:39:54.910424 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 17:39:54.924872 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 17:39:54.939184 1172998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 17:39:55.070575 1172998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 17:39:55.203182 1172998 docker.go:241] disabling docker service ...
	I0510 17:39:55.203295 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 17:39:55.218970 1172998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 17:39:55.233309 1172998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 17:39:55.415615 1172998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 17:39:55.547476 1172998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 17:39:55.561319 1172998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 17:39:55.581380 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0510 17:39:55.591849 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0510 17:39:55.602830 1172998 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0510 17:39:55.602900 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0510 17:39:55.613712 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0510 17:39:55.624676 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0510 17:39:55.636130 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0510 17:39:55.647294 1172998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 17:39:55.658559 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0510 17:39:55.669462 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0510 17:39:55.680091 1172998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0510 17:39:55.690979 1172998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 17:39:55.699923 1172998 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0510 17:39:55.699992 1172998 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0510 17:39:55.712849 1172998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
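The sequence above is a probe-then-fallback: the sysctl fails because /proc/sys/net/bridge does not exist until br_netfilter is loaded, so the module is loaded and IP forwarding enabled afterwards. A sketch of that order of operations (requires root; command strings copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter probes the bridge-nf sysctl, loads br_netfilter
// when the key is missing, then enables IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// /proc/sys/net/bridge is absent until the module is loaded.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}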
	I0510 17:39:55.722530 1172998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:39:55.853198 1172998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0510 17:39:55.885576 1172998 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0510 17:39:55.885665 1172998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0510 17:39:55.889889 1172998 retry.go:31] will retry after 1.227640556s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
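Right after "systemctl restart containerd" the socket is not there yet, so the stat is retried until it appears. A minimal wait-for-socket sketch; fixed 1s polling here, whereas the retry in the log uses a jittered delay:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the containerd socket file exists or the
// timeout elapses, matching the 60s "Will wait" budget above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
}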
	I0510 17:39:57.118342 1172998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0510 17:39:57.123859 1172998 start.go:563] Will wait 60s for crictl version
	I0510 17:39:57.123940 1172998 ssh_runner.go:195] Run: which crictl
	I0510 17:39:57.127736 1172998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 17:39:57.170227 1172998 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0510 17:39:57.170314 1172998 ssh_runner.go:195] Run: containerd --version
	I0510 17:39:57.193745 1172998 ssh_runner.go:195] Run: containerd --version
	I0510 17:39:57.216797 1172998 out.go:177] * Preparing Kubernetes v1.33.0 on containerd 1.7.23 ...
	I0510 17:39:57.218232 1172998 main.go:141] libmachine: (addons-661496) Calling .GetIP
	I0510 17:39:57.221128 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:57.221479 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:39:57.221509 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:39:57.221671 1172998 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0510 17:39:57.225714 1172998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:39:57.238746 1172998 kubeadm.go:875] updating cluster {Name:addons-661496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0510 17:39:57.238855 1172998 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime containerd
	I0510 17:39:57.238909 1172998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:39:57.269645 1172998 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.33.0". assuming images are not preloaded.
	I0510 17:39:57.269754 1172998 ssh_runner.go:195] Run: which lz4
	I0510 17:39:57.273673 1172998 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0510 17:39:57.277866 1172998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0510 17:39:57.277897 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412760592 bytes)
	I0510 17:39:58.495413 1172998 containerd.go:563] duration metric: took 1.221786307s to copy over tarball
	I0510 17:39:58.495486 1172998 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0510 17:40:00.411079 1172998 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.915560713s)
	I0510 17:40:00.411123 1172998 containerd.go:570] duration metric: took 1.915678216s to extract the tarball
	I0510 17:40:00.411135 1172998 ssh_runner.go:146] rm: /preloaded.tar.lz4
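The preload step above copies a 412MB lz4 tarball of container images and unpacks it into /var, keeping extended attributes so file capabilities survive. A sketch of replaying that extraction (requires root and the lz4 binary on PATH; arguments copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload untars the preloaded image tarball into /var with lz4
// decompression, preserving security.capability xattrs as above.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4"))
}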
	I0510 17:40:00.449462 1172998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:40:00.591311 1172998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0510 17:40:00.626460 1172998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:40:00.676302 1172998 retry.go:31] will retry after 146.30517ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-05-10T17:40:00Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0510 17:40:00.823262 1172998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:40:00.860235 1172998 containerd.go:627] all images are preloaded for containerd runtime.
	I0510 17:40:00.860266 1172998 cache_images.go:84] Images are preloaded, skipping loading
	I0510 17:40:00.860281 1172998 kubeadm.go:926] updating node { 192.168.39.168 8443 v1.33.0 containerd true true} ...
	I0510 17:40:00.860447 1172998 kubeadm.go:938] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-661496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
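The kubelet systemd drop-in above is generated from the node config (runtime, version, hostname, node IP). A hedged sketch of rendering it with text/template; the field names here are illustrative, not minikube's own types:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime": "containerd", "Version": "v1.33.0",
		"Node": "addons-661496", "IP": "192.168.39.168",
	})
}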
	I0510 17:40:00.860520 1172998 ssh_runner.go:195] Run: sudo crictl info
	I0510 17:40:00.894826 1172998 cni.go:84] Creating CNI manager for ""
	I0510 17:40:00.894854 1172998 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:40:00.894865 1172998 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 17:40:00.894887 1172998 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-661496 NodeName:addons-661496 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 17:40:00.895003 1172998 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-661496"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.168"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 17:40:00.895087 1172998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 17:40:00.908329 1172998 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 17:40:00.908412 1172998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 17:40:00.919246 1172998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0510 17:40:00.937996 1172998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 17:40:00.956415 1172998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2309 bytes)
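Worth noting in the kubeadm config written above: the KubeletConfiguration section deliberately disables disk-pressure eviction (imageGCHighThresholdPercent: 100 and 0% evictionHard thresholds) so image garbage collection never fires mid-test. A hedged sketch of reading those fields back with the gopkg.in/yaml.v3 library, modeling only the eviction-related keys:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig models just the eviction-related fields shown in the
// generated config above.
type kubeletConfig struct {
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

func main() {
	doc := `
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}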
	I0510 17:40:00.974702 1172998 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0510 17:40:00.978400 1172998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:40:00.991443 1172998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:40:01.127827 1172998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:40:01.157295 1172998 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496 for IP: 192.168.39.168
	I0510 17:40:01.157341 1172998 certs.go:194] generating shared ca certs ...
	I0510 17:40:01.157367 1172998 certs.go:226] acquiring lock for ca certs: {Name:mk7942eb7613cd1b5cd28fde706e9943dadc4445 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:01.157557 1172998 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.key
	I0510 17:40:02.028851 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.crt ...
	I0510 17:40:02.028885 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.crt: {Name:mk5ebf958cd39484a03f4716b32fa9f4828e8749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.029112 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.key ...
	I0510 17:40:02.029227 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.key: {Name:mk4211e8556b6df47299b54db279621eed96de58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.029425 1172998 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.key
	I0510 17:40:02.127880 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.crt ...
	I0510 17:40:02.127916 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.crt: {Name:mk28239bcb974f081392efd547f702f946f7c7c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.128129 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.key ...
	I0510 17:40:02.128145 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.key: {Name:mk2dbb6673c0b09dac77a81715c8449b9119dd34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.128259 1172998 certs.go:256] generating profile certs ...
	I0510 17:40:02.128328 1172998 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.key
	I0510 17:40:02.128345 1172998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt with IP's: []
	I0510 17:40:02.770121 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt ...
	I0510 17:40:02.770166 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: {Name:mk5605ed7493b5cf3448d4e4ad6ad143470a92d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.770372 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.key ...
	I0510 17:40:02.770386 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.key: {Name:mka2b295ec69120c17a47a8dc487e313fb162658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:02.770470 1172998 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key.10321f4d
	I0510 17:40:02.770492 1172998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt.10321f4d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168]
	I0510 17:40:03.260761 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt.10321f4d ...
	I0510 17:40:03.260798 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt.10321f4d: {Name:mk8f5cb5f23362e694715c1d70642a0a777ecafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:03.260966 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key.10321f4d ...
	I0510 17:40:03.260979 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key.10321f4d: {Name:mkeadb9846f0e1676f0f96179d337fe535471558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:03.261053 1172998 certs.go:381] copying /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt.10321f4d -> /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt
	I0510 17:40:03.261124 1172998 certs.go:385] copying /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key.10321f4d -> /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key
	I0510 17:40:03.261177 1172998 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.key
	I0510 17:40:03.261195 1172998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.crt with IP's: []
	I0510 17:40:03.886950 1172998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.crt ...
	I0510 17:40:03.886986 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.crt: {Name:mkfec4fdcc46584efd5d0043ad841b8e7cc4bc42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:03.887173 1172998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.key ...
	I0510 17:40:03.887186 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.key: {Name:mke3c5e4245a18f9aaa36ef8c4cdebf12a7b1abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:03.887369 1172998 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 17:40:03.887411 1172998 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/ca.pem (1078 bytes)
	I0510 17:40:03.887433 1172998 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/cert.pem (1123 bytes)
	I0510 17:40:03.887454 1172998 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-1165049/.minikube/certs/key.pem (1679 bytes)
	I0510 17:40:03.888191 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 17:40:03.916833 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 17:40:03.943310 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 17:40:03.969821 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0510 17:40:03.997414 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0510 17:40:04.025343 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0510 17:40:04.052855 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 17:40:04.080081 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 17:40:04.107449 1172998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-1165049/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 17:40:04.135126 1172998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 17:40:04.153955 1172998 ssh_runner.go:195] Run: openssl version
	I0510 17:40:04.159879 1172998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 17:40:04.171789 1172998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:40:04.176464 1172998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:40:04.176534 1172998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:40:04.183117 1172998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 17:40:04.195686 1172998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 17:40:04.200042 1172998 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0510 17:40:04.200101 1172998 kubeadm.go:392] StartCluster: {Name:addons-661496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-661496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:40:04.200222 1172998 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0510 17:40:04.200319 1172998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 17:40:04.235324 1172998 cri.go:89] found id: ""
	I0510 17:40:04.235423 1172998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 17:40:04.247260 1172998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 17:40:04.258635 1172998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 17:40:04.270480 1172998 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 17:40:04.270508 1172998 kubeadm.go:157] found existing configuration files:
	
	I0510 17:40:04.270572 1172998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 17:40:04.281640 1172998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 17:40:04.281714 1172998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 17:40:04.292607 1172998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 17:40:04.303437 1172998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 17:40:04.303539 1172998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 17:40:04.314398 1172998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 17:40:04.324904 1172998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 17:40:04.324986 1172998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 17:40:04.335708 1172998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 17:40:04.345884 1172998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 17:40:04.345963 1172998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
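The stale-config pass just completed follows one pattern per kubeconfig: keep the file only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. A sketch of that loop (requires root; paths and endpoint copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs mirrors the grep-then-rm sequence above.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			// Endpoint missing (or file absent): remove so it is regenerated.
			_ = exec.Command("sudo", "rm", "-f", f).Run()
			fmt.Println("removed", f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}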
	I0510 17:40:04.356965 1172998 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0510 17:40:04.511871 1172998 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 17:40:16.470240 1172998 kubeadm.go:310] [init] Using Kubernetes version: v1.33.0
	I0510 17:40:16.470328 1172998 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 17:40:16.470431 1172998 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 17:40:16.470586 1172998 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 17:40:16.470731 1172998 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0510 17:40:16.470814 1172998 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 17:40:16.472503 1172998 out.go:235]   - Generating certificates and keys ...
	I0510 17:40:16.472603 1172998 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 17:40:16.472676 1172998 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 17:40:16.472809 1172998 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0510 17:40:16.472911 1172998 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0510 17:40:16.472991 1172998 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0510 17:40:16.473071 1172998 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0510 17:40:16.473157 1172998 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0510 17:40:16.473350 1172998 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-661496 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0510 17:40:16.473437 1172998 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0510 17:40:16.473641 1172998 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-661496 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0510 17:40:16.473764 1172998 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0510 17:40:16.473840 1172998 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0510 17:40:16.473883 1172998 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0510 17:40:16.473933 1172998 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 17:40:16.473975 1172998 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 17:40:16.474027 1172998 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0510 17:40:16.474080 1172998 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 17:40:16.474166 1172998 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 17:40:16.474255 1172998 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 17:40:16.474384 1172998 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 17:40:16.474456 1172998 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 17:40:16.475885 1172998 out.go:235]   - Booting up control plane ...
	I0510 17:40:16.475990 1172998 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 17:40:16.476075 1172998 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 17:40:16.476193 1172998 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 17:40:16.476297 1172998 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 17:40:16.476390 1172998 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 17:40:16.476429 1172998 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 17:40:16.476581 1172998 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0510 17:40:16.476672 1172998 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0510 17:40:16.476760 1172998 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001254777s
	I0510 17:40:16.476843 1172998 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0510 17:40:16.476909 1172998 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.168:8443/livez
	I0510 17:40:16.476998 1172998 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0510 17:40:16.477065 1172998 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0510 17:40:16.477139 1172998 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.232300516s
	I0510 17:40:16.477235 1172998 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.909365289s
	I0510 17:40:16.477341 1172998 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.002606208s
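
	(The kubelet-check and control-plane-check lines above are plain HTTP health probes against each component's local endpoint, each with a 4m0s budget. A hedged Go sketch of the same probing — endpoints copied from the log; skipping TLS verification is a shortcut for these self-signed local endpoints, not what kubeadm itself does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local self-signed endpoints
            },
        }
        endpoints := map[string]string{
            "kubelet":                 "http://127.0.0.1:10248/healthz",
            "kube-controller-manager": "https://127.0.0.1:10257/healthz",
            "kube-scheduler":          "https://127.0.0.1:10259/livez",
            "kube-apiserver":          "https://192.168.39.168:8443/livez",
        }
        for name, url := range endpoints {
            deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget from the log
            for time.Now().Before(deadline) {
                resp, err := client.Get(url)
                if err == nil {
                    healthy := resp.StatusCode == http.StatusOK
                    resp.Body.Close()
                    if healthy {
                        fmt.Println(name, "is healthy")
                        break
                    }
                }
                time.Sleep(500 * time.Millisecond)
            }
        }
    }
	)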
	I0510 17:40:16.477518 1172998 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0510 17:40:16.477719 1172998 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0510 17:40:16.477806 1172998 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0510 17:40:16.478078 1172998 kubeadm.go:310] [mark-control-plane] Marking the node addons-661496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0510 17:40:16.478129 1172998 kubeadm.go:310] [bootstrap-token] Using token: kf8nq8.faatt9qa2ldbhogm
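
	(Bootstrap tokens like the one above follow kubeadm's fixed "[a-z0-9]{6}.[a-z0-9]{16}" shape — a 6-character token ID, a dot, then a 16-character token secret. A quick format check in Go:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // token ID (6 chars) dot token secret (16 chars), lowercase alphanumerics
        tokenRe := regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)
        fmt.Println(tokenRe.MatchString("kf8nq8.faatt9qa2ldbhogm")) // true
    }
	)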
	I0510 17:40:16.479704 1172998 out.go:235]   - Configuring RBAC rules ...
	I0510 17:40:16.479800 1172998 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0510 17:40:16.479877 1172998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0510 17:40:16.480043 1172998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0510 17:40:16.480185 1172998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0510 17:40:16.480337 1172998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0510 17:40:16.480430 1172998 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0510 17:40:16.480535 1172998 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0510 17:40:16.480574 1172998 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0510 17:40:16.480612 1172998 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0510 17:40:16.480618 1172998 kubeadm.go:310] 
	I0510 17:40:16.480673 1172998 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0510 17:40:16.480680 1172998 kubeadm.go:310] 
	I0510 17:40:16.480749 1172998 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0510 17:40:16.480755 1172998 kubeadm.go:310] 
	I0510 17:40:16.480777 1172998 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0510 17:40:16.480839 1172998 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0510 17:40:16.480885 1172998 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0510 17:40:16.480891 1172998 kubeadm.go:310] 
	I0510 17:40:16.480936 1172998 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0510 17:40:16.480945 1172998 kubeadm.go:310] 
	I0510 17:40:16.480992 1172998 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0510 17:40:16.480998 1172998 kubeadm.go:310] 
	I0510 17:40:16.481041 1172998 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0510 17:40:16.481104 1172998 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0510 17:40:16.481184 1172998 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0510 17:40:16.481194 1172998 kubeadm.go:310] 
	I0510 17:40:16.481269 1172998 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0510 17:40:16.481339 1172998 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0510 17:40:16.481346 1172998 kubeadm.go:310] 
	I0510 17:40:16.481432 1172998 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kf8nq8.faatt9qa2ldbhogm \
	I0510 17:40:16.481525 1172998 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ffe61921efc4d62c2b0265e9e4d4ecc78e39339829cff2fd65f8ba0081188365 \
	I0510 17:40:16.481548 1172998 kubeadm.go:310] 	--control-plane 
	I0510 17:40:16.481553 1172998 kubeadm.go:310] 
	I0510 17:40:16.481627 1172998 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0510 17:40:16.481634 1172998 kubeadm.go:310] 
	I0510 17:40:16.481702 1172998 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kf8nq8.faatt9qa2ldbhogm \
	I0510 17:40:16.481814 1172998 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ffe61921efc4d62c2b0265e9e4d4ecc78e39339829cff2fd65f8ba0081188365 
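
	(The --discovery-token-ca-cert-hash printed above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. A sketch of recomputing it from the CA certificate, assuming the conventional /etc/kubernetes/pki/ca.crt path:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional kubeadm path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
	)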
	I0510 17:40:16.481828 1172998 cni.go:84] Creating CNI manager for ""
	I0510 17:40:16.481835 1172998 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:40:16.483427 1172998 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0510 17:40:16.484586 1172998 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0510 17:40:16.497436 1172998 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
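
	(Here minikube writes a 496-byte bridge CNI conflist into /etc/cni/net.d. The exact payload is not shown in the log; the sketch below writes an illustrative conflist of the same kind — the field values, including the 10.244.0.0/16 pod subnet implied by the pod IPs elsewhere in this report, are assumptions:

    package main

    import "os"

    // Illustrative bridge conflist; the real 496-byte payload is not shown in the log.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        // needs root; the kubelet's CNI plugin scans /etc/cni/net.d for conflists
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
	)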
	I0510 17:40:16.523293 1172998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0510 17:40:16.523387 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:16.523448 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-661496 minikube.k8s.io/updated_at=2025_05_10T17_40_16_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4 minikube.k8s.io/name=addons-661496 minikube.k8s.io/primary=true
	I0510 17:40:16.565710 1172998 ops.go:34] apiserver oom_adj: -16
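
	(oom_adj: -16 confirms the apiserver is biased away from the kernel OOM killer. The probe above is just a read of /proc/<pid>/oom_adj; a minimal equivalent, with pgrep standing in for minikube's internal pid lookup:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0] // first match if there are several
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // -16 biases the OOM killer away
    }
	)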
	I0510 17:40:16.680542 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:17.180839 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:17.681443 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:18.180793 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:18.680685 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:19.181602 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:19.681072 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:20.180878 1172998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 17:40:20.277992 1172998 kubeadm.go:1105] duration metric: took 3.754682071s to wait for elevateKubeSystemPrivileges
	I0510 17:40:20.278037 1172998 kubeadm.go:394] duration metric: took 16.077940348s to StartCluster
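
	(The burst of `kubectl get sa default` runs above is a ~500ms poll waiting for the default ServiceAccount to exist before the minikube-rbac ClusterRoleBinding can take effect; the loop took 3.75s here. A stripped-down version of that wait, assuming kubectl on PATH and the kubeconfig path from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed budget; the log's loop took ~3.75s
        for time.Now().Before(deadline) {
            err := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default ServiceAccount exists")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }
	)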
	I0510 17:40:20.278063 1172998 settings.go:142] acquiring lock: {Name:mk469c480b22625281eadd5ebdc6a04348599b1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:20.278227 1172998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:40:20.278842 1172998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-1165049/kubeconfig: {Name:mk677f0619615b74c93431771f158c6db83d5db8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:40:20.279095 1172998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0510 17:40:20.279139 1172998 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0510 17:40:20.279302 1172998 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
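
	(enable addons start logs the full toEnable map; only the true entries are switched on, which is why the lines that follow set one addon at a time. A toy reduction of such a map to the list of addons to enable, with names abbreviated from the log:

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        // abbreviated from the toEnable map in the log
        toEnable := map[string]bool{
            "ingress":        true,
            "ingress-dns":    true,
            "registry":       true,
            "metrics-server": true,
            "volcano":        true,
            "dashboard":      false,
            "ambassador":     false,
        }
        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled)
        fmt.Println("enabling addons:", enabled)
    }
	)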
	I0510 17:40:20.279392 1172998 config.go:182] Loaded profile config "addons-661496": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:40:20.279446 1172998 addons.go:69] Setting ingress-dns=true in profile "addons-661496"
	I0510 17:40:20.279455 1172998 addons.go:69] Setting inspektor-gadget=true in profile "addons-661496"
	I0510 17:40:20.279471 1172998 addons.go:238] Setting addon inspektor-gadget=true in "addons-661496"
	I0510 17:40:20.279482 1172998 addons.go:69] Setting default-storageclass=true in profile "addons-661496"
	I0510 17:40:20.279501 1172998 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-661496"
	I0510 17:40:20.279561 1172998 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-661496"
	I0510 17:40:20.279563 1172998 addons.go:69] Setting storage-provisioner=true in profile "addons-661496"
	I0510 17:40:20.279582 1172998 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-661496"
	I0510 17:40:20.279594 1172998 addons.go:238] Setting addon storage-provisioner=true in "addons-661496"
	I0510 17:40:20.279602 1172998 addons.go:69] Setting cloud-spanner=true in profile "addons-661496"
	I0510 17:40:20.279629 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279628 1172998 addons.go:69] Setting volcano=true in profile "addons-661496"
	I0510 17:40:20.279647 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279652 1172998 addons.go:238] Setting addon cloud-spanner=true in "addons-661496"
	I0510 17:40:20.279663 1172998 addons.go:238] Setting addon volcano=true in "addons-661496"
	I0510 17:40:20.279636 1172998 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-661496"
	I0510 17:40:20.279681 1172998 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-661496"
	I0510 17:40:20.279689 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279693 1172998 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-661496"
	I0510 17:40:20.279707 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279724 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279736 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279472 1172998 addons.go:238] Setting addon ingress-dns=true in "addons-661496"
	I0510 17:40:20.279781 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279518 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279445 1172998 addons.go:69] Setting yakd=true in profile "addons-661496"
	I0510 17:40:20.280186 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280195 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280205 1172998 addons.go:238] Setting addon yakd=true in "addons-661496"
	I0510 17:40:20.280213 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.279522 1172998 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-661496"
	I0510 17:40:20.280224 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.279537 1172998 addons.go:69] Setting volumesnapshots=true in profile "addons-661496"
	I0510 17:40:20.280241 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280246 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280252 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280260 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.279548 1172998 addons.go:69] Setting metrics-server=true in profile "addons-661496"
	I0510 17:40:20.280268 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280277 1172998 addons.go:238] Setting addon metrics-server=true in "addons-661496"
	I0510 17:40:20.280291 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.280467 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280498 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280572 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.280603 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.281085 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.281161 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.281367 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.281417 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.281508 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.281550 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280230 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.279528 1172998 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-661496"
	I0510 17:40:20.282484 1172998 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-661496"
	I0510 17:40:20.282963 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.283114 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.280232 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.285028 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.286020 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.297056 1172998 out.go:177] * Verifying Kubernetes components...
	I0510 17:40:20.279531 1172998 addons.go:69] Setting gcp-auth=true in profile "addons-661496"
	I0510 17:40:20.297550 1172998 mustload.go:65] Loading cluster: addons-661496
	I0510 17:40:20.297853 1172998 config.go:182] Loaded profile config "addons-661496": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:40:20.298334 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.298540 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.298838 1172998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:40:20.279539 1172998 addons.go:69] Setting ingress=true in profile "addons-661496"
	I0510 17:40:20.299157 1172998 addons.go:238] Setting addon ingress=true in "addons-661496"
	I0510 17:40:20.299239 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.299776 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.299915 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.307215 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0510 17:40:20.312444 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0510 17:40:20.280263 1172998 addons.go:238] Setting addon volumesnapshots=true in "addons-661496"
	I0510 17:40:20.313615 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.279550 1172998 addons.go:69] Setting registry=true in profile "addons-661496"
	I0510 17:40:20.313823 1172998 addons.go:238] Setting addon registry=true in "addons-661496"
	I0510 17:40:20.313871 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.314084 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.314305 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.314391 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.314508 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.317106 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0510 17:40:20.317427 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0510 17:40:20.317694 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.318125 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.318226 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.318311 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.321417 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.321444 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.321927 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.322699 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.323609 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.323634 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.323991 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.324064 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.324091 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.324189 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.324415 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40705
	I0510 17:40:20.325051 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.325082 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.325052 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.325096 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.325215 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.325811 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.325884 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.326583 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.328654 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.328698 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.328940 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.328958 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.329085 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0510 17:40:20.329266 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.334031 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.334145 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.334725 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0510 17:40:20.335070 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.335083 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.335149 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
	I0510 17:40:20.335244 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.335293 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.335511 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.336624 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.336667 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.337419 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.337978 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.337996 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.338437 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.338622 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.340760 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.344334 1172998 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-661496"
	I0510 17:40:20.344399 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.344896 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.344945 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.345875 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.345902 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.346545 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.347379 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.347480 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.358036 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0510 17:40:20.358677 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.359323 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.359359 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.359897 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.360799 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.360831 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.366725 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33397
	I0510 17:40:20.375728 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.376394 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.376432 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.376886 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.377163 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.377736 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0510 17:40:20.377926 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I0510 17:40:20.378444 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.379388 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.379928 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.379953 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.380277 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.380299 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.380722 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.380809 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0510 17:40:20.381357 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.381403 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.381987 1172998 addons.go:238] Setting addon default-storageclass=true in "addons-661496"
	I0510 17:40:20.382039 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.382084 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0510 17:40:20.382405 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.382447 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.382524 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0510 17:40:20.382859 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.382968 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.383427 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.383453 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.383846 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.384012 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.384076 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.384873 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45873
	I0510 17:40:20.385047 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.385090 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I0510 17:40:20.385558 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.385659 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.386158 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.386186 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.386466 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0510 17:40:20.386599 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.386623 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36749
	I0510 17:40:20.386815 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.387036 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.387118 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.387126 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.387282 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.387499 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38363
	I0510 17:40:20.387802 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.387836 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.387976 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.387987 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.388138 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.388181 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.388246 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.388310 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.388358 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.388481 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.388531 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.388767 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.388901 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.388914 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.389161 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.389282 1172998 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.1
	I0510 17:40:20.389612 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.389674 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.390550 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.390570 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.391364 1172998 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.33
	I0510 17:40:20.391604 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0510 17:40:20.391758 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.392187 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.392390 1172998 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.11.1
	I0510 17:40:20.393188 1172998 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 17:40:20.393315 1172998 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0510 17:40:20.393968 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0510 17:40:20.394003 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.393346 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:20.394442 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.394482 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.395436 1172998 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.11.1
	I0510 17:40:20.395527 1172998 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:40:20.395543 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 17:40:20.395562 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.393365 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.397059 1172998 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0510 17:40:20.398000 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33537
	I0510 17:40:20.398892 1172998 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 17:40:20.398910 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0510 17:40:20.398933 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.399313 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.399392 1172998 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0510 17:40:20.399407 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480231 bytes)
	I0510 17:40:20.399435 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.401697 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.401721 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.402992 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.403009 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.403112 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.403334 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.403406 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.403428 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.403537 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.403835 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.403855 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.403878 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.403889 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.403917 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.404011 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.404353 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.404402 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.404529 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45509
	I0510 17:40:20.404570 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.404640 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.404684 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.404697 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.404823 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.404920 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.405006 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
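
	(Each `sshutil.go:53` line above corresponds to a key-authenticated SSH client against the node at 192.168.39.168:22 as user docker. A hedged sketch of that setup with golang.org/x/crypto/ssh — InsecureIgnoreHostKey is a brevity shortcut here, not necessarily what sshutil does:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity shortcut, not for production
        }
        client, err := ssh.Dial("tcp", "192.168.39.168:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("ssh client connected")
    }
	)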
	I0510 17:40:20.405894 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0510 17:40:20.406011 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0510 17:40:20.408472 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.408479 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0510 17:40:20.408961 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.408990 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409003 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.409064 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409093 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409142 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409240 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.409551 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.409584 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.409616 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.409642 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.409657 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.409656 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.409737 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.409868 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.409871 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.409923 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.410161 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.410229 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.410278 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.410429 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.410449 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.410553 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.410558 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.410671 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.410678 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.410915 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.410955 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.411008 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.411013 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.411207 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.411392 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.412038 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.412047 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.412057 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.412058 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.412850 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.412857 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.412909 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.413297 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.413369 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.413410 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.413462 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.413492 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.414732 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.414744 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.416214 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.416435 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.416532 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0510 17:40:20.416660 1172998 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.1
	I0510 17:40:20.416863 1172998 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0510 17:40:20.418146 1172998 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 17:40:20.418168 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0510 17:40:20.418187 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.418744 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0510 17:40:20.418822 1172998 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.39.0
	I0510 17:40:20.418942 1172998 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 17:40:20.418956 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0510 17:40:20.418974 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.420226 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0510 17:40:20.420244 1172998 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0510 17:40:20.420266 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.420588 1172998 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0510 17:40:20.420615 1172998 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0510 17:40:20.420633 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.422504 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.423079 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.423109 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.423143 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0510 17:40:20.423447 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.423682 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.423866 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.424033 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.425508 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0510 17:40:20.425731 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.426377 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.426405 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.426756 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.426793 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.427321 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.427342 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.427724 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.427751 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.427867 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0510 17:40:20.427980 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.428168 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.428284 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.428344 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.428355 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.428486 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.428501 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.428535 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.428634 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.428633 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.428633 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.429197 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.430428 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0510 17:40:20.431635 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0510 17:40:20.432870 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0510 17:40:20.432912 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0510 17:40:20.433535 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.433692 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0510 17:40:20.434514 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.434574 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.434677 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.435237 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.435256 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.435240 1172998 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0510 17:40:20.435683 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.435755 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.435877 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.436387 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:20.436412 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:20.436604 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0510 17:40:20.436622 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0510 17:40:20.436643 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.439003 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.439081 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42383
	I0510 17:40:20.439644 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.439756 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0510 17:40:20.440099 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.440126 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.440561 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.440612 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.440825 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.441373 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.441396 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.441510 1172998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0510 17:40:20.441632 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.441860 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.442008 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.442035 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.442067 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.442250 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.442634 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.442876 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.443039 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.443661 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.443962 1172998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 17:40:20.445249 1172998 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0510 17:40:20.446342 1172998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 17:40:20.447435 1172998 out.go:177]   - Using image docker.io/busybox:stable
	I0510 17:40:20.447660 1172998 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 17:40:20.447677 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0510 17:40:20.447698 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.447805 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39069
	I0510 17:40:20.448374 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.449196 1172998 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 17:40:20.449216 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0510 17:40:20.449877 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.450851 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.450871 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.451658 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.451749 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.451796 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.452349 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.452383 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.452403 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.452621 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.452961 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.453177 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.454153 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.454425 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.454773 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.454799 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.455035 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.455208 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.455407 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.455553 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.456199 1172998 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0510 17:40:20.457564 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0510 17:40:20.457586 1172998 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0510 17:40:20.457607 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.460253 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38505
	I0510 17:40:20.461030 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.461140 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.461516 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I0510 17:40:20.461657 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.461679 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.461721 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.461753 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.461948 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.462131 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.462139 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.462138 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.462343 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.462393 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.462499 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.462655 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.462685 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.463058 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.463246 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.464706 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.464956 1172998 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 17:40:20.464973 1172998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 17:40:20.464990 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.465184 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.465981 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0510 17:40:20.466487 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:20.467031 1172998 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0510 17:40:20.467202 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:20.467219 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:20.467599 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:20.467797 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:20.468246 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.468465 1172998 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 17:40:20.468481 1172998 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 17:40:20.468498 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.469212 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.469239 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.469390 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:20.469761 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.470102 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.470282 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.470439 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.471024 1172998 out.go:177]   - Using image docker.io/registry:3.0.0
	I0510 17:40:20.471896 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.472367 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.472400 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.472689 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.472859 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.473043 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.473207 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.473896 1172998 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0510 17:40:20.475173 1172998 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0510 17:40:20.475184 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0510 17:40:20.475198 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:20.478203 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.478661 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:20.478692 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:20.478809 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:20.478937 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:20.479002 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:20.479063 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:20.633896 1172998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
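For context, the sed pipeline above edits the coredns ConfigMap in place: it inserts a log directive before errors and a hosts block, mapping host.minikube.internal to the host-side gateway 192.168.39.1, immediately before the forward plugin. Reconstructed from the sed expressions alone (the rest of the Corefile is the stock CoreDNS default and does not appear in this log), the edited fragment looks roughly like:

	.:53 {
	    log
	    errors
	    # ... stock plugins (health, ready, kubernetes, cache, ...) unchanged ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}
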
	W0510 17:40:20.657259 1172998 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46478->192.168.39.168:22: read: connection reset by peer
	I0510 17:40:20.657301 1172998 retry.go:31] will retry after 243.584195ms: ssh: handshake failed: read tcp 192.168.39.1:46478->192.168.39.168:22: read: connection reset by peer
	W0510 17:40:20.657387 1172998 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46492->192.168.39.168:22: read: connection reset by peer
	I0510 17:40:20.657397 1172998 retry.go:31] will retry after 192.996834ms: ssh: handshake failed: read tcp 192.168.39.1:46492->192.168.39.168:22: read: connection reset by peer
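The handshake failures above are treated as transient: sshutil hands the error to minikube's retry helper (retry.go:31), which sleeps an apparently jittered sub-second interval and redials. A minimal Go sketch of that pattern; the function name, attempt count, and interval bounds are illustrative, not minikube's actual values:

	package retryutil

	import (
		"log"
		"math/rand"
		"time"
	)

	// retrySSH keeps redialing until dial succeeds or attempts run out,
	// sleeping a randomized sub-second interval between tries, mirroring
	// the "will retry after ..." lines in the log above. Sketch only.
	func retrySSH(dial func() error, maxAttempts int) error {
		var err error
		for i := 0; i < maxAttempts; i++ {
			if err = dial(); err == nil {
				return nil
			}
			d := 100*time.Millisecond + time.Duration(rand.Int63n(int64(200*time.Millisecond)))
			log.Printf("will retry after %v: %v", d, err)
			time.Sleep(d)
		}
		return err
	}
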
	I0510 17:40:20.662103 1172998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:40:20.983386 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0510 17:40:20.985179 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:40:21.064550 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 17:40:21.123191 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0510 17:40:21.144014 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 17:40:21.169058 1172998 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0510 17:40:21.169094 1172998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0510 17:40:21.255187 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 17:40:21.263131 1172998 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 17:40:21.263156 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0510 17:40:21.267100 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0510 17:40:21.267125 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0510 17:40:21.287558 1172998 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0510 17:40:21.287582 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0510 17:40:21.417189 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 17:40:21.486484 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 17:40:21.500434 1172998 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0510 17:40:21.500465 1172998 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0510 17:40:21.604959 1172998 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0510 17:40:21.604995 1172998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0510 17:40:21.725738 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0510 17:40:21.726475 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:40:21.796808 1172998 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0510 17:40:21.796839 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0510 17:40:21.871014 1172998 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 17:40:21.871043 1172998 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 17:40:21.963683 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0510 17:40:21.963713 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0510 17:40:21.996922 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0510 17:40:21.996950 1172998 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0510 17:40:22.342513 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0510 17:40:22.342542 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0510 17:40:22.356535 1172998 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:40:22.356560 1172998 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0510 17:40:22.362837 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0510 17:40:22.366737 1172998 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0510 17:40:22.366772 1172998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0510 17:40:22.417129 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0510 17:40:22.417169 1172998 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0510 17:40:22.585777 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0510 17:40:22.585813 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0510 17:40:22.679690 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:40:22.682372 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0510 17:40:22.682392 1172998 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0510 17:40:22.731527 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0510 17:40:22.731571 1172998 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0510 17:40:22.751903 1172998 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.117957188s)
	I0510 17:40:22.751947 1172998 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0510 17:40:22.751969 1172998 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.089837396s)
	I0510 17:40:22.752889 1172998 node_ready.go:35] waiting up to 6m0s for node "addons-661496" to be "Ready" ...
	I0510 17:40:22.761770 1172998 node_ready.go:49] node "addons-661496" is "Ready"
	I0510 17:40:22.761802 1172998 node_ready.go:38] duration metric: took 8.883307ms for node "addons-661496" to be "Ready" ...
	I0510 17:40:22.761819 1172998 api_server.go:52] waiting for apiserver process to appear ...
	I0510 17:40:22.761884 1172998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:40:23.014160 1172998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0510 17:40:23.014191 1172998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0510 17:40:23.257564 1172998 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-661496" context rescaled to 1 replicas
	I0510 17:40:23.318996 1172998 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0510 17:40:23.319027 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0510 17:40:23.382368 1172998 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 17:40:23.382397 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0510 17:40:23.529409 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0510 17:40:23.529438 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0510 17:40:23.720142 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0510 17:40:23.732745 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 17:40:23.786281 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0510 17:40:23.786321 1172998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0510 17:40:24.129437 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0510 17:40:24.129471 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0510 17:40:24.426077 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.442645042s)
	I0510 17:40:24.426137 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:24.426151 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:24.426622 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:24.426670 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:24.426691 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:24.426694 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:24.426705 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:24.427055 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:24.427072 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:24.524010 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0510 17:40:24.524037 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0510 17:40:25.051509 1172998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 17:40:25.051541 1172998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0510 17:40:25.179711 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 17:40:25.348066 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.362838877s)
	I0510 17:40:25.348138 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:25.348173 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:25.348149 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.283566236s)
	I0510 17:40:25.348221 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:25.348238 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:25.348546 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:25.348557 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:25.348564 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:25.348574 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:25.348582 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:25.348582 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:25.348608 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:25.348623 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:25.348638 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:25.348646 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:25.349047 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:25.349050 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:25.349058 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:25.349057 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:25.349047 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:25.349070 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:27.475958 1172998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0510 17:40:27.476001 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:27.480092 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:27.480622 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:27.480645 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:27.480872 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:27.481117 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:27.481308 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:27.481495 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:28.079226 1172998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0510 17:40:28.290769 1172998 addons.go:238] Setting addon gcp-auth=true in "addons-661496"
	I0510 17:40:28.290863 1172998 host.go:66] Checking if "addons-661496" exists ...
	I0510 17:40:28.291335 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:28.291385 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:28.309401 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I0510 17:40:28.309915 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:28.310464 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:28.310495 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:28.310895 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:28.311526 1172998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:40:28.311565 1172998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:40:28.327688 1172998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I0510 17:40:28.328219 1172998 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:40:28.328749 1172998 main.go:141] libmachine: Using API Version  1
	I0510 17:40:28.328781 1172998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:40:28.329175 1172998 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:40:28.329396 1172998 main.go:141] libmachine: (addons-661496) Calling .GetState
	I0510 17:40:28.331278 1172998 main.go:141] libmachine: (addons-661496) Calling .DriverName
	I0510 17:40:28.331545 1172998 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0510 17:40:28.331578 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHHostname
	I0510 17:40:28.334625 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:28.335054 1172998 main.go:141] libmachine: (addons-661496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:78:fe", ip: ""} in network mk-addons-661496: {Iface:virbr1 ExpiryTime:2025-05-10 18:39:46 +0000 UTC Type:0 Mac:52:54:00:9e:78:fe Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-661496 Clientid:01:52:54:00:9e:78:fe}
	I0510 17:40:28.335087 1172998 main.go:141] libmachine: (addons-661496) DBG | domain addons-661496 has defined IP address 192.168.39.168 and MAC address 52:54:00:9e:78:fe in network mk-addons-661496
	I0510 17:40:28.335372 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHPort
	I0510 17:40:28.335576 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHKeyPath
	I0510 17:40:28.335781 1172998 main.go:141] libmachine: (addons-661496) Calling .GetSSHUsername
	I0510 17:40:28.335938 1172998 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/addons-661496/id_rsa Username:docker}
	I0510 17:40:32.670911 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.547675412s)
	I0510 17:40:32.670959 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.526904485s)
	I0510 17:40:32.670989 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671004 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671008 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (11.415793955s)
	I0510 17:40:32.671035 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671043 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671014 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671006 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671125 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.253908737s)
	I0510 17:40:32.671231 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671245 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671257 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.184732834s)
	I0510 17:40:32.671292 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671305 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671425 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (10.945652272s)
	I0510 17:40:32.671441 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671449 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671517 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.94502032s)
	I0510 17:40:32.671533 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671541 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671581 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.30871244s)
	I0510 17:40:32.671672 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.671674 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.671685 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.671693 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671700 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671715 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.671723 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.671732 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671738 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671797 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.671803 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.671812 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.671821 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.671971 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672000 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672014 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.992287254s)
	I0510 17:40:32.672047 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672058 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672127 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.672168 1172998 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.910244887s)
	I0510 17:40:32.672177 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672184 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672191 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672191 1172998 api_server.go:72] duration metric: took 12.393014419s to wait for apiserver process to appear ...
	I0510 17:40:32.672197 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672199 1172998 api_server.go:88] waiting for apiserver healthz status ...
	I0510 17:40:32.672248 1172998 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0510 17:40:32.672302 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672313 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672321 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672328 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672547 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.672575 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672582 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672590 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672592 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.952397085s)
	I0510 17:40:32.672613 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.672616 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.672627 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672637 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672644 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672759 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.939982644s)
	I0510 17:40:32.672806 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	W0510 17:40:32.672807 1172998 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0510 17:40:32.672837 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672842 1172998 retry.go:31] will retry after 270.785919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
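The failure above is a CRD readiness race: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRDs that define its kind, and the apply loses when the CRD is not yet established, hence the hint "ensure CRDs are installed first" (the retry at 17:40:32.944142 below reapplies with --force). One way to close this kind of race is to wait for the CRD's Established condition before applying dependent objects; a sketch using client-go's wait helpers, assuming an apiextensions clientset is already constructed:

	package crdwait

	import (
		"context"
		"time"

		apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// waitForCRDEstablished polls until the named CRD reports Established,
	// at which point custom resources of its kind can be applied safely.
	func waitForCRDEstablished(ctx context.Context, c clientset.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
			func(ctx context.Context) (bool, error) {
				crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // not created yet; keep polling
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

For this log, waiting on volumesnapshotclasses.snapshot.storage.k8s.io before applying csi-hostpath-snapshotclass.yaml would have avoided the retry.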
	I0510 17:40:32.672844 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672597 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.672856 1172998 addons.go:479] Verifying addon metrics-server=true in "addons-661496"
	I0510 17:40:32.672926 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.672934 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.672944 1172998 addons.go:479] Verifying addon ingress=true in "addons-661496"
	I0510 17:40:32.675102 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.675132 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.675138 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.675415 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.675451 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.675459 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.675468 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.675474 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.676284 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.676318 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.676324 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.676563 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.676587 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.676592 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.676601 1172998 addons.go:479] Verifying addon registry=true in "addons-661496"
	I0510 17:40:32.676668 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.677495 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.677499 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.677525 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.677535 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.677574 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.676687 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.677716 1172998 out.go:177] * Verifying ingress addon...
	I0510 17:40:32.676712 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.677794 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.677818 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.677852 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.677943 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.677979 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.677987 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.676722 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.678013 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.678023 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.678030 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.678071 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.676727 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.676743 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.678176 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.678189 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.678215 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.678239 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.678529 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.677615 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.679629 1172998 out.go:177] * Verifying registry addon...
	I0510 17:40:32.679674 1172998 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0510 17:40:32.679975 1172998 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-661496 service yakd-dashboard -n yakd-dashboard
	
	I0510 17:40:32.681581 1172998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0510 17:40:32.710585 1172998 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
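The healthz check above is a plain HTTPS GET against the apiserver, judged healthy when it returns 200 with the body "ok". An equivalent probe in Go; the InsecureSkipVerify transport is an assumption for the sketch, since the cluster CA handling is not visible in this log:

	package health

	import (
		"crypto/tls"
		"io"
		"net/http"
	)

	// apiserverHealthy GETs a healthz URL such as
	// https://192.168.39.168:8443/healthz and reports whether the
	// apiserver answered 200 ok, as in the log lines above.
	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{Transport: &http.Transport{
			// Sketch only: real callers should verify against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}
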
	I0510 17:40:32.715250 1172998 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0510 17:40:32.715275 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:32.715409 1172998 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0510 17:40:32.715436 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:32.733284 1172998 api_server.go:141] control plane version: v1.33.0
	I0510 17:40:32.733338 1172998 api_server.go:131] duration metric: took 61.110993ms to wait for apiserver health ...
	I0510 17:40:32.733353 1172998 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 17:40:32.769273 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.769301 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.769628 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.769646 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.769652 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	W0510 17:40:32.769760 1172998 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
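The warning above is Kubernetes' optimistic-concurrency check: the update to the local-path StorageClass was made against a stale resourceVersion because something else wrote the object in between (plausibly the storage-provisioner-rancher apply that completed moments earlier). The standard client-go remedy is to re-read and retry on conflict; a hypothetical sketch, with the function name and a typed clientset cs assumed for illustration, of marking a class non-default conflict-safely:

	package defaultsc

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation on a StorageClass,
	// re-reading the object and retrying whenever the write hits a conflict.
	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}
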
	I0510 17:40:32.782708 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:32.782729 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:32.783069 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:32.783155 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:32.783171 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:32.824981 1172998 system_pods.go:59] 17 kube-system pods found
	I0510 17:40:32.825040 1172998 system_pods.go:61] "amd-gpu-device-plugin-v4gbz" [f294f291-744b-4850-90b4-50d91dab8406] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 17:40:32.825051 1172998 system_pods.go:61] "coredns-674b8bbfcf-6m8wh" [de3b4b5b-9d45-48fa-bc02-7e68d0f4a719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:40:32.825064 1172998 system_pods.go:61] "coredns-674b8bbfcf-tdjvp" [b934ce97-eb9a-44e0-8dce-b5f8bb54f550] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:40:32.825071 1172998 system_pods.go:61] "csi-hostpath-attacher-0" [85d52be6-3924-4ae3-bad8-06764ecf38a6] Pending
	I0510 17:40:32.825077 1172998 system_pods.go:61] "etcd-addons-661496" [631566ce-1617-43c0-aae6-20963bfed3d4] Running
	I0510 17:40:32.825083 1172998 system_pods.go:61] "kube-apiserver-addons-661496" [8de722c2-b091-489b-9e78-d16d797f7fe7] Running
	I0510 17:40:32.825088 1172998 system_pods.go:61] "kube-controller-manager-addons-661496" [3538f9c0-52da-492c-8efd-07edc4fb3790] Running
	I0510 17:40:32.825098 1172998 system_pods.go:61] "kube-ingress-dns-minikube" [51ebd7c0-1306-4a22-bd30-9f9e94fca514] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 17:40:32.825104 1172998 system_pods.go:61] "kube-proxy-prpxb" [385933ac-4f81-4f5f-a113-9b4ee3a18d3b] Running
	I0510 17:40:32.825110 1172998 system_pods.go:61] "kube-scheduler-addons-661496" [d3a212e1-5d32-4025-bdd7-3dfbe0fb0246] Running
	I0510 17:40:32.825117 1172998 system_pods.go:61] "metrics-server-7fbb699795-5w57m" [2e1beec0-5626-4c7b-88bc-8260d997758b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 17:40:32.825138 1172998 system_pods.go:61] "nvidia-device-plugin-daemonset-j9pr5" [14fb66ef-5095-4274-8657-2c667308fa0d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 17:40:32.825151 1172998 system_pods.go:61] "registry-694bd45846-zdzh4" [4ba351e4-9daa-43da-8b99-54cf78e8b8d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 17:40:32.825166 1172998 system_pods.go:61] "registry-proxy-8pcc7" [b49e8001-c050-47a2-8471-50c2355d968d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 17:40:32.825176 1172998 system_pods.go:61] "snapshot-controller-68b874b76f-88ddv" [21c08b41-9091-4a26-a852-2590e7c0ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 17:40:32.825187 1172998 system_pods.go:61] "snapshot-controller-68b874b76f-wz768" [ee20255a-a346-401c-aa60-c4d336342082] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 17:40:32.825201 1172998 system_pods.go:61] "storage-provisioner" [4b684c4b-a952-48da-bc38-3f6663c462e7] Running
	I0510 17:40:32.825213 1172998 system_pods.go:74] duration metric: took 91.852459ms to wait for pod list to return data ...
	I0510 17:40:32.825224 1172998 default_sa.go:34] waiting for default service account to be created ...
	I0510 17:40:32.916771 1172998 default_sa.go:45] found service account: "default"
	I0510 17:40:32.916807 1172998 default_sa.go:55] duration metric: took 91.573454ms for default service account to be created ...
	I0510 17:40:32.916818 1172998 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 17:40:32.944142 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 17:40:33.008687 1172998 system_pods.go:86] 18 kube-system pods found
	I0510 17:40:33.008733 1172998 system_pods.go:89] "amd-gpu-device-plugin-v4gbz" [f294f291-744b-4850-90b4-50d91dab8406] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 17:40:33.008744 1172998 system_pods.go:89] "coredns-674b8bbfcf-6m8wh" [de3b4b5b-9d45-48fa-bc02-7e68d0f4a719] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:40:33.008757 1172998 system_pods.go:89] "coredns-674b8bbfcf-tdjvp" [b934ce97-eb9a-44e0-8dce-b5f8bb54f550] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 17:40:33.008766 1172998 system_pods.go:89] "csi-hostpath-attacher-0" [85d52be6-3924-4ae3-bad8-06764ecf38a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 17:40:33.008771 1172998 system_pods.go:89] "csi-hostpathplugin-q57z4" [f716e96c-2b81-41bd-a505-ef8bab6002bf] Pending
	I0510 17:40:33.008777 1172998 system_pods.go:89] "etcd-addons-661496" [631566ce-1617-43c0-aae6-20963bfed3d4] Running
	I0510 17:40:33.008782 1172998 system_pods.go:89] "kube-apiserver-addons-661496" [8de722c2-b091-489b-9e78-d16d797f7fe7] Running
	I0510 17:40:33.008788 1172998 system_pods.go:89] "kube-controller-manager-addons-661496" [3538f9c0-52da-492c-8efd-07edc4fb3790] Running
	I0510 17:40:33.008798 1172998 system_pods.go:89] "kube-ingress-dns-minikube" [51ebd7c0-1306-4a22-bd30-9f9e94fca514] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 17:40:33.008805 1172998 system_pods.go:89] "kube-proxy-prpxb" [385933ac-4f81-4f5f-a113-9b4ee3a18d3b] Running
	I0510 17:40:33.008812 1172998 system_pods.go:89] "kube-scheduler-addons-661496" [d3a212e1-5d32-4025-bdd7-3dfbe0fb0246] Running
	I0510 17:40:33.008820 1172998 system_pods.go:89] "metrics-server-7fbb699795-5w57m" [2e1beec0-5626-4c7b-88bc-8260d997758b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 17:40:33.008828 1172998 system_pods.go:89] "nvidia-device-plugin-daemonset-j9pr5" [14fb66ef-5095-4274-8657-2c667308fa0d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 17:40:33.008846 1172998 system_pods.go:89] "registry-694bd45846-zdzh4" [4ba351e4-9daa-43da-8b99-54cf78e8b8d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 17:40:33.008857 1172998 system_pods.go:89] "registry-proxy-8pcc7" [b49e8001-c050-47a2-8471-50c2355d968d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 17:40:33.008866 1172998 system_pods.go:89] "snapshot-controller-68b874b76f-88ddv" [21c08b41-9091-4a26-a852-2590e7c0ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 17:40:33.008877 1172998 system_pods.go:89] "snapshot-controller-68b874b76f-wz768" [ee20255a-a346-401c-aa60-c4d336342082] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 17:40:33.008884 1172998 system_pods.go:89] "storage-provisioner" [4b684c4b-a952-48da-bc38-3f6663c462e7] Running
	I0510 17:40:33.008897 1172998 system_pods.go:126] duration metric: took 92.070192ms to wait for k8s-apps to be running ...
	I0510 17:40:33.008911 1172998 system_svc.go:44] waiting for kubelet service to be running ...
	I0510 17:40:33.008978 1172998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 17:40:33.299242 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:33.301255 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:33.572025 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.39224087s)
	I0510 17:40:33.572121 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:33.572167 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:33.572147 1172998 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.240568926s)
	I0510 17:40:33.572503 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:33.572519 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:33.572530 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:33.572537 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:33.572784 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:33.572811 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:33.572818 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:33.572829 1172998 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-661496"
	I0510 17:40:33.573838 1172998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 17:40:33.574625 1172998 out.go:177] * Verifying csi-hostpath-driver addon...
	I0510 17:40:33.575959 1172998 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0510 17:40:33.576945 1172998 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0510 17:40:33.576963 1172998 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0510 17:40:33.576945 1172998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0510 17:40:33.604900 1172998 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0510 17:40:33.604928 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:33.678097 1172998 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0510 17:40:33.678135 1172998 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0510 17:40:33.690417 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:33.697715 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:33.775850 1172998 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 17:40:33.775888 1172998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0510 17:40:33.818557 1172998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 17:40:34.080794 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:34.188754 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:34.287745 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:34.580392 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:34.648382 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.704147097s)
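That batch installs the external-snapshotter CRDs plus the volume-snapshot-controller deployment. The controller can only reconcile once the CRDs are Established; a quick hand-check (a sketch; the CRD names are taken from the manifest filenames above):

		# wait for the snapshot CRDs to be accepted by the API server
		kubectl --context addons-661496 wait --for=condition=Established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
		  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
		  crd/volumesnapshots.snapshot.storage.k8s.io
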
	I0510 17:40:34.648441 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:34.648455 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:34.648474 1172998 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.639464519s)
	I0510 17:40:34.648516 1172998 system_svc.go:56] duration metric: took 1.63960172s WaitForService to wait for kubelet
	I0510 17:40:34.648534 1172998 kubeadm.go:578] duration metric: took 14.369355476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 17:40:34.648566 1172998 node_conditions.go:102] verifying NodePressure condition ...
	I0510 17:40:34.648728 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:34.648787 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:34.648809 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:34.648820 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:34.648792 1172998 main.go:141] libmachine: (addons-661496) DBG | Closing plugin on server side
	I0510 17:40:34.649036 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:34.649050 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:34.651756 1172998 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0510 17:40:34.651778 1172998 node_conditions.go:123] node cpu capacity is 2
	I0510 17:40:34.651792 1172998 node_conditions.go:105] duration metric: took 3.219506ms to run NodePressure ...
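The NodePressure check reads conditions and capacity straight off the Node object; the figures logged here (17734596Ki ephemeral storage, 2 CPUs) can be read back directly (a sketch):

		# prints the node's full capacity map, including ephemeral-storage and cpu
		kubectl --context addons-661496 get node addons-661496 -o jsonpath='{.status.capacity}'
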
	I0510 17:40:34.651809 1172998 start.go:241] waiting for startup goroutines ...
	I0510 17:40:34.692997 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:34.693002 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:34.866950 1172998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.048332101s)
	I0510 17:40:34.867011 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:34.867027 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:34.867364 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:34.867418 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:34.867432 1172998 main.go:141] libmachine: Making call to close driver server
	I0510 17:40:34.867440 1172998 main.go:141] libmachine: (addons-661496) Calling .Close
	I0510 17:40:34.867753 1172998 main.go:141] libmachine: Successfully made call to close driver server
	I0510 17:40:34.867773 1172998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0510 17:40:34.868862 1172998 addons.go:479] Verifying addon gcp-auth=true in "addons-661496"
	I0510 17:40:34.870512 1172998 out.go:177] * Verifying gcp-auth addon...
	I0510 17:40:34.872659 1172998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0510 17:40:34.878541 1172998 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
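gcp-auth=true verification follows the same kapi.go polling pattern in the dedicated gcp-auth namespace; at this point the webhook pod has not been created yet (0 pods found), so the loop keeps polling. To inspect the same selector by hand (a sketch; label and namespace are taken from the log lines above):

		kubectl --context addons-661496 -n gcp-auth get pods \
		  -l kubernetes.io/minikube-addons=gcp-auth
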
	I0510 17:40:35.081453 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:35.183209 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:35.184783 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:35.580670 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:35.683704 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:35.684812 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:36.082824 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:36.185508 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:36.185707 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:36.581408 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:36.683243 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:36.684968 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:37.080761 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:37.457827 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:37.457902 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:37.581216 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:37.682920 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:37.684504 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:38.081174 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:38.182925 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:38.184549 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:38.581105 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:38.682786 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:38.684301 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:39.080260 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:39.183474 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:39.185318 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:39.580416 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:39.682986 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:39.685111 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:40.210513 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:40.212177 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:40.212527 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:40.580396 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:40.684105 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:40.684442 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:41.081321 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:41.182910 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:41.184590 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:41.581041 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:41.685519 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:41.685771 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:42.184402 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:42.184945 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:42.188229 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:42.581232 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:42.683324 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:42.684712 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:43.080759 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:43.183956 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:43.185168 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:43.580455 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:43.683440 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:43.685105 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:44.081736 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:44.183242 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:44.184756 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:44.581605 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:44.684445 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:44.684455 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:45.081044 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:45.183075 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:45.184941 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:45.580782 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:45.684011 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:45.685836 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:46.082003 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:46.184338 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:46.185564 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:46.580918 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:46.683848 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:46.684348 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:47.081457 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:47.183481 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:47.185220 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:47.580652 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:47.683701 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:47.685242 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:48.081318 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:48.183534 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:48.185108 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:48.580446 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:48.683352 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:48.684847 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:49.080808 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:49.185978 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:49.186063 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:49.584342 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:49.686888 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:49.688825 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:50.081535 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:50.183951 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:50.184447 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:50.581090 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:50.683127 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:50.684628 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:51.081303 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:51.184532 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:51.185150 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:51.580667 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:51.688888 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:51.688962 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:52.080920 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:52.183735 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:52.184978 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:52.580322 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:52.682937 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:52.684668 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:53.080961 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:53.186964 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:53.187190 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:53.580103 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:53.683050 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:53.684758 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:54.080877 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:54.182733 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:54.184170 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:54.579991 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:54.682922 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:54.684571 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:55.081903 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:55.182588 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:55.184081 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:55.580865 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:55.683187 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:55.684937 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:56.080990 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:56.182641 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:56.184962 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:56.727950 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:56.728357 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:56.729990 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:57.081040 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:57.183015 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:57.185176 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:57.580564 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:57.683267 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:57.684825 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:58.080673 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:58.184002 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:58.184989 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:58.580631 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:58.683345 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:58.684928 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:59.081682 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:59.184431 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:59.186216 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:40:59.581493 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:40:59.684058 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:40:59.685510 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:00.081006 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:00.183105 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:00.185777 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:00.581308 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:00.683205 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:00.684806 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:01.080662 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:01.183645 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:01.184448 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:01.581311 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:01.683246 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:01.685051 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:02.080897 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:02.186491 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:02.186517 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:02.581157 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:02.683671 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:02.686197 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:03.081250 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:03.183794 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:03.185739 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 17:41:03.581228 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:03.683265 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:03.684915 1172998 kapi.go:107] duration metric: took 31.003329796s to wait for kubernetes.io/minikube-addons=registry ...
	I0510 17:41:04.080836 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:04.183653 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:04.580453 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:04.683528 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:05.081126 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:05.183202 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:05.581145 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:05.682816 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:06.081140 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:06.183002 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:06.581229 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:06.683152 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:07.080923 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:07.183889 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:07.581046 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:07.684440 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:08.080794 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:08.183839 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:08.580945 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:08.684653 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:09.081789 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:09.183084 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:09.580851 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:09.683870 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:10.080313 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:10.183486 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:10.581525 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:10.683044 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:11.154261 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:11.182912 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:11.581152 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:11.683379 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:12.080719 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:12.183846 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:12.581434 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:12.683161 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:13.081956 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:13.184229 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:13.581549 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:13.683601 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:14.081093 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:14.183600 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:14.580730 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:14.683228 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:15.080339 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:15.183325 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:15.584013 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:15.683555 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:16.080422 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:16.183637 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:16.584050 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:16.683076 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:17.083032 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:17.189566 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:17.585161 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:17.684511 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:18.082658 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:18.183560 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:18.583266 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:18.683327 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:19.080691 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:19.184486 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:19.581623 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:19.683300 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:20.163713 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:20.184042 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:20.586731 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:20.683415 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:21.081576 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:21.183344 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:21.581174 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:21.683565 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:22.080962 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:22.184114 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:22.581341 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:22.683445 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:23.081200 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:23.183571 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:23.581424 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:23.818334 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:24.089098 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:24.184335 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:24.586605 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:24.684842 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:25.085150 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:25.184312 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:25.580768 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:25.682722 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:26.080228 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:26.183157 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:26.580683 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:26.684126 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:27.081132 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:27.182745 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:27.582158 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:27.683913 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:28.081058 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:28.183598 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:28.661956 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:28.682626 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:29.080693 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:29.183676 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:29.580348 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:29.683177 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:30.080532 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:30.183630 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:30.581087 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:30.685322 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:31.081044 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:31.183336 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:31.597518 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:31.683461 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:32.081638 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:32.449842 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:32.580686 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:32.683557 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:33.080613 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:33.183265 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:33.583045 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:33.684425 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:34.080547 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:34.183557 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:34.580536 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:34.684086 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:35.081658 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:35.184914 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:35.588387 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:35.683361 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:36.081303 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:36.183657 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:36.580759 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:36.683721 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:37.080861 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:37.185270 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:37.581853 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:37.682528 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:38.082598 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:38.183867 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:38.580878 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:38.683321 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:39.081294 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:39.183278 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:39.580938 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:39.682981 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:40.081143 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:40.182943 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:40.581046 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:40.683825 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:41.081383 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:41.183183 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:41.580674 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:41.683691 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:42.081064 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:42.184649 1172998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 17:41:42.583847 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:42.685032 1172998 kapi.go:107] duration metric: took 1m10.00535342s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0510 17:41:43.087022 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:43.588107 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:44.080701 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:44.582672 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:45.084079 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:45.582703 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:46.080935 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:46.581871 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:47.080356 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:47.581014 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:48.081520 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:48.581818 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 17:41:49.081584 1172998 kapi.go:107] duration metric: took 1m15.504636912s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0510 17:41:58.376389 1172998 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0510 17:41:58.376415 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:41:58.876728 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:41:59.376724 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:41:59.876718 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:00.376972 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:00.876779 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:01.376633 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:01.876727 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:02.376605 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:02.876631 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:03.376928 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:03.877261 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:04.375645 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:04.876435 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:05.376016 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:05.876977 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:06.376309 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:06.877422 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:07.375557 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:07.876372 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:08.375429 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:08.875870 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:09.376421 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:09.875946 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:10.376536 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:10.875949 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:11.377634 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:11.876404 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:12.376066 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:12.876469 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:13.376814 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:13.876777 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:14.376536 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:14.876479 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:15.375681 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:15.876228 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:16.376491 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:16.876004 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:17.376729 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:17.876387 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:18.380816 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:18.877810 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:19.376712 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:19.876505 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:20.376270 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:20.877011 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:21.375577 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:21.876815 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:22.376529 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:22.876262 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:23.375848 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:23.876780 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:24.376569 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:24.876787 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:25.376349 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:25.876871 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:26.376573 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:26.876262 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:27.375454 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:27.879734 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:28.376371 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:28.876095 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:29.375776 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:29.876633 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:30.375897 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:30.875942 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:31.376292 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:31.876594 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:32.376110 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:32.876506 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:33.376034 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:33.877017 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:34.376779 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:34.877546 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:35.376400 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:35.876671 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:36.377042 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:36.876370 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:37.375892 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:37.876924 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:38.376710 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:38.876280 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:39.376029 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:39.891401 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:40.376347 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:40.876571 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:41.376070 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:41.882187 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:42.376472 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:42.876607 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:43.375706 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:43.877249 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:44.375979 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:44.877290 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:45.376116 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:45.876724 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:46.376690 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:46.876675 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:47.376285 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:47.876219 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:48.375671 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:48.876330 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:49.376111 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:49.876391 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:50.375642 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:50.875909 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:51.376647 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:51.877246 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:52.375751 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:52.876489 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:53.375914 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:53.877644 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:54.376365 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:54.876457 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:55.376133 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:55.876574 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:56.376953 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:56.877893 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:57.376643 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:57.881047 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:58.375550 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:58.876668 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:59.377069 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:42:59.876390 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:00.375937 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:00.876746 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:01.376792 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:01.878732 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:02.376717 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:02.877021 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:03.376817 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:03.881208 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:04.376639 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:04.876422 1172998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 17:43:05.376519 1172998 kapi.go:107] duration metric: took 2m30.503855969s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0510 17:43:05.378320 1172998 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-661496 cluster.
	I0510 17:43:05.379801 1172998 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0510 17:43:05.381023 1172998 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0510 17:43:05.382555 1172998 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, volcano, metrics-server, nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0510 17:43:05.383825 1172998 addons.go:514] duration metric: took 2m45.104557537s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns volcano metrics-server nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0510 17:43:05.383885 1172998 start.go:246] waiting for cluster config update ...
	I0510 17:43:05.383912 1172998 start.go:255] writing updated cluster config ...
	I0510 17:43:05.384286 1172998 ssh_runner.go:195] Run: rm -f paused
	I0510 17:43:05.391228 1172998 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:43:05.395222 1172998 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-6m8wh" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.399776 1172998 pod_ready.go:94] pod "coredns-674b8bbfcf-6m8wh" is "Ready"
	I0510 17:43:05.399799 1172998 pod_ready.go:86] duration metric: took 4.552136ms for pod "coredns-674b8bbfcf-6m8wh" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.401658 1172998 pod_ready.go:83] waiting for pod "etcd-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.406056 1172998 pod_ready.go:94] pod "etcd-addons-661496" is "Ready"
	I0510 17:43:05.406148 1172998 pod_ready.go:86] duration metric: took 4.470056ms for pod "etcd-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.409345 1172998 pod_ready.go:83] waiting for pod "kube-apiserver-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.414276 1172998 pod_ready.go:94] pod "kube-apiserver-addons-661496" is "Ready"
	I0510 17:43:05.414298 1172998 pod_ready.go:86] duration metric: took 4.930227ms for pod "kube-apiserver-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.416686 1172998 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.796365 1172998 pod_ready.go:94] pod "kube-controller-manager-addons-661496" is "Ready"
	I0510 17:43:05.796398 1172998 pod_ready.go:86] duration metric: took 379.688776ms for pod "kube-controller-manager-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:05.996431 1172998 pod_ready.go:83] waiting for pod "kube-proxy-prpxb" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:06.395671 1172998 pod_ready.go:94] pod "kube-proxy-prpxb" is "Ready"
	I0510 17:43:06.395705 1172998 pod_ready.go:86] duration metric: took 399.242909ms for pod "kube-proxy-prpxb" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:06.596013 1172998 pod_ready.go:83] waiting for pod "kube-scheduler-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:06.995243 1172998 pod_ready.go:94] pod "kube-scheduler-addons-661496" is "Ready"
	I0510 17:43:06.995276 1172998 pod_ready.go:86] duration metric: took 399.231107ms for pod "kube-scheduler-addons-661496" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:43:06.995286 1172998 pod_ready.go:40] duration metric: took 1.604022064s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:43:07.042926 1172998 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 17:43:07.044837 1172998 out.go:177] * Done! kubectl is now configured to use "addons-661496" cluster and "default" namespace by default
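	
	A minimal sketch of the gcp-auth opt-out and refresh paths referenced in the messages above. The pod name "mypod" is hypothetical; per the hint in the log, the skip label must be present in the pod configuration at creation time, and already-running pods need to be recreated or refreshed to pick up the mount.
	
	# Hypothetical example: create a pod that opts out of credential mounting via
	# the gcp-auth-skip-secret label, or re-run the addon with --refresh so
	# existing pods get the credentials after recreation.
	kubectl --context addons-661496 run mypod --image=busybox:stable \
	  --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
	out/minikube-linux-amd64 -p addons-661496 addons enable gcp-auth --refresh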
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e46853278a58       56cc512116c8f       3 minutes ago       Running             busybox                   0                   17761bf6d8782       busybox
	2e55dbfa3ad40       ee44bc2368033       5 minutes ago       Running             controller                0                   43f8b636ba100       ingress-nginx-controller-7c9f76cd49-w87h8
	ea099c471e05f       a62eeff05ba51       5 minutes ago       Exited              patch                     1                   aeb99ebb9ef90       ingress-nginx-admission-patch-9d5dm
	69c61255f8eb6       a62eeff05ba51       5 minutes ago       Exited              create                    0                   dd018e9b62112       ingress-nginx-admission-create-fhvxz
	37626d630e4f1       e16d1e3a10667       6 minutes ago       Running             local-path-provisioner    0                   da00fa4d62c35       local-path-provisioner-76f89f99b5-fnv92
	62defc806e3d4       30dd67412fdea       6 minutes ago       Running             minikube-ingress-dns      0                   60e7f8a3fe996       kube-ingress-dns-minikube
	4061d8ab8a59a       d5e667c0f2bb6       6 minutes ago       Running             amd-gpu-device-plugin     0                   36c61f3680774       amd-gpu-device-plugin-v4gbz
	5aa32f181c6fb       6e38f40d628db       6 minutes ago       Running             storage-provisioner       0                   1a635936bed99       storage-provisioner
	96f113e1188db       1cf5f116067c6       6 minutes ago       Running             coredns                   0                   d92079ffb222b       coredns-674b8bbfcf-6m8wh
	0f62645c8df43       f1184a0bd7fe5       6 minutes ago       Running             kube-proxy                0                   6725536a3ce63       kube-proxy-prpxb
	5d41575d5f369       8d72586a76469       7 minutes ago       Running             kube-scheduler            0                   caf411c817260       kube-scheduler-addons-661496
	ffb6f242cbf4c       1d579cb6d6967       7 minutes ago       Running             kube-controller-manager   0                   d1975cce9669e       kube-controller-manager-addons-661496
	b0e9d7bab929d       499038711c081       7 minutes ago       Running             etcd                      0                   77fc9fe62c7a0       etcd-addons-661496
	f0e94709db491       6ba9545b2183e       7 minutes ago       Running             kube-apiserver            0                   36baa3f15bdb7       kube-apiserver-addons-661496
	
	
	==> containerd <==
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.272123148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d133027e7871a035b5f64098d338f544a9fbafd8cce171a0bba20e72df83027f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.272300705Z" level=info msg="RemovePodSandbox \"d133027e7871a035b5f64098d338f544a9fbafd8cce171a0bba20e72df83027f\" returns successfully"
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.273011795Z" level=info msg="StopPodSandbox for \"6412b7197c763d44b76070cb8379268ae48d28eac6337648666d1e3b77293968\""
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.310150539Z" level=info msg="TearDown network for sandbox \"6412b7197c763d44b76070cb8379268ae48d28eac6337648666d1e3b77293968\" successfully"
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.310288528Z" level=info msg="StopPodSandbox for \"6412b7197c763d44b76070cb8379268ae48d28eac6337648666d1e3b77293968\" returns successfully"
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.310945588Z" level=info msg="RemovePodSandbox for \"6412b7197c763d44b76070cb8379268ae48d28eac6337648666d1e3b77293968\""
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.311174101Z" level=info msg="Forcibly stopping sandbox \"6412b7197c763d44b76070cb8379268ae48d28eac6337648666d1e3b77293968\""
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.335644876Z" level=info msg="TearDown network for sandbox \"6412b7197c763d44b76070cb8379268ae48d28eac6337648666d1e3b77293968\" successfully"
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.341765941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6412b7197c763d44b76070cb8379268ae48d28eac6337648666d1e3b77293968\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	May 10 17:45:17 addons-661496 containerd[847]: time="2025-05-10T17:45:17.342020770Z" level=info msg="RemovePodSandbox \"6412b7197c763d44b76070cb8379268ae48d28eac6337648666d1e3b77293968\" returns successfully"
	May 10 17:45:23 addons-661496 containerd[847]: time="2025-05-10T17:45:23.817126523Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	May 10 17:45:23 addons-661496 containerd[847]: time="2025-05-10T17:45:23.820056096Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:45:24 addons-661496 containerd[847]: time="2025-05-10T17:45:24.429418352Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:45:26 addons-661496 containerd[847]: time="2025-05-10T17:45:26.090319016Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 17:45:26 addons-661496 containerd[847]: time="2025-05-10T17:45:26.090390836Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	May 10 17:45:46 addons-661496 containerd[847]: time="2025-05-10T17:45:46.817334616Z" level=info msg="PullImage \"busybox:stable\""
	May 10 17:45:46 addons-661496 containerd[847]: time="2025-05-10T17:45:46.820864691Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:45:47 addons-661496 containerd[847]: time="2025-05-10T17:45:47.421835224Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:45:49 addons-661496 containerd[847]: time="2025-05-10T17:45:49.442809159Z" level=error msg="PullImage \"busybox:stable\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 17:45:49 addons-661496 containerd[847]: time="2025-05-10T17:45:49.442949990Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=21178"
	May 10 17:46:12 addons-661496 containerd[847]: time="2025-05-10T17:46:12.817092398Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	May 10 17:46:12 addons-661496 containerd[847]: time="2025-05-10T17:46:12.820810873Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:46:13 addons-661496 containerd[847]: time="2025-05-10T17:46:13.415800246Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 17:46:15 addons-661496 containerd[847]: time="2025-05-10T17:46:15.080314309Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 17:46:15 addons-661496 containerd[847]: time="2025-05-10T17:46:15.080356043Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
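	
	The 429 responses above are Docker Hub's unauthenticated pull rate limit; they are what leave the nginx and busybox pods stuck in ImagePullBackOff in the failing tests. A possible mitigation sketch, assuming Docker Hub credentials on the host and the addons-661496 profile (not part of the recorded run):
	
	# Authenticate and pull on the host, then side-load the images into the
	# minikube node so kubelet resolves them locally instead of hitting the registry.
	docker login
	docker pull docker.io/nginx:alpine
	docker pull docker.io/busybox:stable
	out/minikube-linux-amd64 -p addons-661496 image load docker.io/nginx:alpine
	out/minikube-linux-amd64 -p addons-661496 image load docker.io/busybox:stable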
	
	
	==> coredns [96f113e1188dbabe774b4d904716ffca7a49f5575a945e1c5a06730298098808] <==
	[INFO] 10.244.0.8:37407 - 1651 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000653913s
	[INFO] 10.244.0.8:37407 - 24733 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000110023s
	[INFO] 10.244.0.8:37407 - 35665 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000359567s
	[INFO] 10.244.0.8:37407 - 9024 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000213984s
	[INFO] 10.244.0.8:37407 - 6547 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000185165s
	[INFO] 10.244.0.8:37407 - 55798 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000127176s
	[INFO] 10.244.0.8:37407 - 54927 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000200961s
	[INFO] 10.244.0.8:51946 - 35519 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149475s
	[INFO] 10.244.0.8:51946 - 35794 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00037073s
	[INFO] 10.244.0.8:58194 - 53577 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000230425s
	[INFO] 10.244.0.8:58194 - 53359 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00033287s
	[INFO] 10.244.0.8:53557 - 45474 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102418s
	[INFO] 10.244.0.8:53557 - 45715 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000186201s
	[INFO] 10.244.0.8:39461 - 56505 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000121214s
	[INFO] 10.244.0.8:39461 - 56287 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000169677s
	[INFO] 10.244.0.27:54699 - 52656 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000763266s
	[INFO] 10.244.0.27:52320 - 24998 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000586839s
	[INFO] 10.244.0.27:44324 - 8059 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000183014s
	[INFO] 10.244.0.27:35935 - 48888 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000217207s
	[INFO] 10.244.0.27:53338 - 29968 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155345s
	[INFO] 10.244.0.27:47993 - 59292 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000196871s
	[INFO] 10.244.0.27:54710 - 20206 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004822824s
	[INFO] 10.244.0.27:55930 - 47065 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005644589s
	[INFO] 10.244.0.32:58617 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00035506s
	[INFO] 10.244.0.32:59494 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000211617s
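	
	The NXDOMAIN bursts above are ordinary resolv.conf search-path expansion (the pod's resolver appends kube-system.svc.cluster.local, svc.cluster.local, and cluster.local before trying the bare name); the trailing NOERROR answers show the lookups ultimately succeed. A quick verification sketch, assuming a throwaway pod is acceptable: a trailing dot marks the name fully qualified and skips the expansion.
	
	# The trailing dot suppresses search-path expansion, so only one query is sent.
	kubectl --context addons-661496 run dnscheck --rm -it --image=busybox:stable \
	  --restart=Never -- nslookup registry.kube-system.svc.cluster.local.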
	
	
	==> describe nodes <==
	Name:               addons-661496
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-661496
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=addons-661496
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_40_16_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-661496
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:40:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-661496
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 17:47:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:44:51 +0000   Sat, 10 May 2025 17:40:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:44:51 +0000   Sat, 10 May 2025 17:40:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:44:51 +0000   Sat, 10 May 2025 17:40:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:44:51 +0000   Sat, 10 May 2025 17:40:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    addons-661496
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912748Ki
	  pods:               110
	System Info:
	  Machine ID:                 35093bd7e517431a9628c06138768a2f
	  System UUID:                35093bd7-e517-431a-9628-c06138768a2f
	  Boot ID:                    d04c19cd-be15-4f30-98c2-b9909b79a3a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  ingress-nginx               ingress-nginx-controller-7c9f76cd49-w87h8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         6m45s
	  kube-system                 amd-gpu-device-plugin-v4gbz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 coredns-674b8bbfcf-6m8wh                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m55s
	  kube-system                 etcd-addons-661496                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m
	  kube-system                 kube-apiserver-addons-661496                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-controller-manager-addons-661496        200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-proxy-prpxb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-scheduler-addons-661496                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	  local-path-storage          local-path-provisioner-76f89f99b5-fnv92      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m53s                kube-proxy       
	  Normal  Starting                 7m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m6s (x8 over 7m6s)  kubelet          Node addons-661496 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m6s (x8 over 7m6s)  kubelet          Node addons-661496 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m6s (x7 over 7m6s)  kubelet          Node addons-661496 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m                   kubelet          Node addons-661496 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m                   kubelet          Node addons-661496 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m                   kubelet          Node addons-661496 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m59s                kubelet          Node addons-661496 status is now: NodeReady
	  Normal  RegisteredNode           6m56s                node-controller  Node addons-661496 event: Registered Node addons-661496 in Controller
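	
	The node view above can be reproduced directly against the cluster; note that the 850m of CPU requests sits well within the node's 2-CPU capacity, so the stuck pods are failing on image pulls rather than on scheduling pressure.
	
	kubectl --context addons-661496 describe node addons-661496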
	
	
	==> dmesg <==
	[  +0.642859] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.000057] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.015220] kauditd_printk_skb: 109 callbacks suppressed
	[  +8.528125] kauditd_printk_skb: 129 callbacks suppressed
	[May10 17:41] kauditd_printk_skb: 2 callbacks suppressed
	[  +4.459069] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.351598] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.526521] kauditd_printk_skb: 45 callbacks suppressed
	[  +0.871595] kauditd_printk_skb: 19 callbacks suppressed
	[  +2.985223] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.010178] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.025628] kauditd_printk_skb: 21 callbacks suppressed
	[May10 17:43] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.301687] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000104] kauditd_printk_skb: 20 callbacks suppressed
	[May10 17:44] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000027] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.969275] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.712489] kauditd_printk_skb: 34 callbacks suppressed
	[  +1.484431] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.163032] kauditd_printk_skb: 15 callbacks suppressed
	[  +3.363842] kauditd_printk_skb: 14 callbacks suppressed
	[  +3.029581] kauditd_printk_skb: 36 callbacks suppressed
	[May10 17:45] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [b0e9d7bab929dd860fc9fe1b1ebd5f5ba31e3fd422cfd59a025ce19eace353e2] <==
	{"level":"info","ts":"2025-05-10T17:40:56.713088Z","caller":"traceutil/trace.go:171","msg":"trace[1835380644] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1084; }","duration":"138.891845ms","start":"2025-05-10T17:40:56.574188Z","end":"2025-05-10T17:40:56.713080Z","steps":["trace[1835380644] 'agreement among raft nodes before linearized reading'  (duration: 138.858244ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:40:56.713176Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.673589ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:40:56.713197Z","caller":"traceutil/trace.go:171","msg":"trace[449374613] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1084; }","duration":"147.693621ms","start":"2025-05-10T17:40:56.565496Z","end":"2025-05-10T17:40:56.713190Z","steps":["trace[449374613] 'agreement among raft nodes before linearized reading'  (duration: 147.665363ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:41:11.147195Z","caller":"traceutil/trace.go:171","msg":"trace[322139401] linearizableReadLoop","detail":"{readStateIndex:1169; appliedIndex:1168; }","duration":"276.724755ms","start":"2025-05-10T17:41:10.870449Z","end":"2025-05-10T17:41:11.147174Z","steps":["trace[322139401] 'read index received'  (duration: 276.550526ms)","trace[322139401] 'applied index is now lower than readState.Index'  (duration: 173.811µs)"],"step_count":2}
	{"level":"info","ts":"2025-05-10T17:41:11.147269Z","caller":"traceutil/trace.go:171","msg":"trace[1632780145] transaction","detail":"{read_only:false; response_revision:1141; number_of_response:1; }","duration":"331.849089ms","start":"2025-05-10T17:41:10.815414Z","end":"2025-05-10T17:41:11.147263Z","steps":["trace[1632780145] 'process raft request'  (duration: 331.614968ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:11.147375Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-05-10T17:41:10.815400Z","time spent":"331.886483ms","remote":"127.0.0.1:43618","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1140 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-05-10T17:41:11.147561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.102405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:41:11.147597Z","caller":"traceutil/trace.go:171","msg":"trace[111070395] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1141; }","duration":"277.160719ms","start":"2025-05-10T17:41:10.870428Z","end":"2025-05-10T17:41:11.147589Z","steps":["trace[111070395] 'agreement among raft nodes before linearized reading'  (duration: 277.104791ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:41:14.454502Z","caller":"traceutil/trace.go:171","msg":"trace[1114780893] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"178.979213ms","start":"2025-05-10T17:41:14.275506Z","end":"2025-05-10T17:41:14.454485Z","steps":["trace[1114780893] 'process raft request'  (duration: 178.719391ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:20.157316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.468883ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17287232900780463074 > lease_revoke:<id:6fe896bb487712c3>","response":"size:29"}
	{"level":"info","ts":"2025-05-10T17:41:20.157491Z","caller":"traceutil/trace.go:171","msg":"trace[312139809] linearizableReadLoop","detail":"{readStateIndex:1208; appliedIndex:1207; }","duration":"286.455789ms","start":"2025-05-10T17:41:19.871012Z","end":"2025-05-10T17:41:20.157468Z","steps":["trace[312139809] 'read index received'  (duration: 142.74821ms)","trace[312139809] 'applied index is now lower than readState.Index'  (duration: 143.706455ms)"],"step_count":2}
	{"level":"warn","ts":"2025-05-10T17:41:20.157610Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.603975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:41:20.157642Z","caller":"traceutil/trace.go:171","msg":"trace[1166239947] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1177; }","duration":"286.661216ms","start":"2025-05-10T17:41:19.870973Z","end":"2025-05-10T17:41:20.157634Z","steps":["trace[1166239947] 'agreement among raft nodes before linearized reading'  (duration: 286.603848ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:23.813100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.505354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:41:23.813143Z","caller":"traceutil/trace.go:171","msg":"trace[1816674603] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1199; }","duration":"134.577306ms","start":"2025-05-10T17:41:23.678555Z","end":"2025-05-10T17:41:23.813132Z","steps":["trace[1816674603] 'range keys from in-memory index tree'  (duration: 134.457412ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:32.441803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.46138ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:41:32.442789Z","caller":"traceutil/trace.go:171","msg":"trace[52974085] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1250; }","duration":"262.516178ms","start":"2025-05-10T17:41:32.180255Z","end":"2025-05-10T17:41:32.442771Z","steps":["trace[52974085] 'range keys from in-memory index tree'  (duration: 261.391539ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:41:32.442613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.488856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-05-10T17:41:32.442648Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.249026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-05-10T17:41:32.446649Z","caller":"traceutil/trace.go:171","msg":"trace[2101719547] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1250; }","duration":"228.258228ms","start":"2025-05-10T17:41:32.218374Z","end":"2025-05-10T17:41:32.446632Z","steps":["trace[2101719547] 'count revisions from in-memory index tree'  (duration: 224.207737ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:41:32.446121Z","caller":"traceutil/trace.go:171","msg":"trace[1053638242] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1250; }","duration":"191.014934ms","start":"2025-05-10T17:41:32.255095Z","end":"2025-05-10T17:41:32.446110Z","steps":["trace[1053638242] 'range keys from in-memory index tree'  (duration: 187.308252ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:41:48.358385Z","caller":"traceutil/trace.go:171","msg":"trace[205082752] transaction","detail":"{read_only:false; response_revision:1345; number_of_response:1; }","duration":"279.543397ms","start":"2025-05-10T17:41:48.078826Z","end":"2025-05-10T17:41:48.358369Z","steps":["trace[205082752] 'process raft request'  (duration: 279.443575ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:43:29.998049Z","caller":"traceutil/trace.go:171","msg":"trace[2131074455] transaction","detail":"{read_only:false; response_revision:1613; number_of_response:1; }","duration":"273.363582ms","start":"2025-05-10T17:43:29.724655Z","end":"2025-05-10T17:43:29.998018Z","steps":["trace[2131074455] 'process raft request'  (duration: 272.877413ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:43:38.622297Z","caller":"traceutil/trace.go:171","msg":"trace[1907339207] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1655; }","duration":"226.096167ms","start":"2025-05-10T17:43:38.396185Z","end":"2025-05-10T17:43:38.622281Z","steps":["trace[1907339207] 'process raft request'  (duration: 225.94906ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T17:43:38.632702Z","caller":"traceutil/trace.go:171","msg":"trace[1703725001] transaction","detail":"{read_only:false; response_revision:1656; number_of_response:1; }","duration":"232.960343ms","start":"2025-05-10T17:43:38.399726Z","end":"2025-05-10T17:43:38.632687Z","steps":["trace[1703725001] 'process raft request'  (duration: 231.799352ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:47:15 up 7 min,  0 user,  load average: 0.36, 0.76, 0.47
	Linux addons-661496 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [f0e94709db4919a868c3abd359869dfc8ae3023971570e7c3cbef8615372a1c1] <==
	I0510 17:44:20.797097       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.183.199"}
	I0510 17:44:24.399305       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:44:30.189628       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:44:32.360130       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:44:35.746711       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0510 17:44:36.779930       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0510 17:44:37.987030       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0510 17:44:38.180631       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.187.250"}
	I0510 17:44:38.187206       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:44:46.279208       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0510 17:45:03.407445       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.407816       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:45:03.429150       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.429610       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:45:03.449742       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.450344       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:45:03.476778       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.477089       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:45:03.542041       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:45:03.542322       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0510 17:45:04.429327       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0510 17:45:04.544379       1 cacher.go:183] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0510 17:45:04.567189       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W0510 17:45:04.627618       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0510 17:45:09.410104       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [ffb6f242cbf4cc00584479d6bca7b87ecafb6910608502978ef070cc8c3ac695] <==
	E0510 17:45:40.121859       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:45:42.113452       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:45:42.528113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:45:43.474987       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:45:43.579411       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:45:43.731071       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:45:53.520166       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:45:54.361451       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:16.372618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:16.553011       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:17.535646       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:18.783706       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:24.238684       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:27.727749       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:28.331939       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:28.544836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:29.810782       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:40.591544       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:42.423510       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:46:59.296317       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:47:00.285147       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:47:01.044340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:47:03.274505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:47:03.544429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:47:12.944245       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0f62645c8df43b1446b6c83a4d18c64e9372efd243bd6f194bd8dfb229d9c803] <==
	E0510 17:40:21.832482       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 17:40:21.877419       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	E0510 17:40:21.877504       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:40:22.072548       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 17:40:22.072597       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 17:40:22.072629       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:40:22.122387       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:40:22.122740       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:40:22.122766       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:40:22.178604       1 config.go:199] "Starting service config controller"
	I0510 17:40:22.178637       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:40:22.178691       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:40:22.178707       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:40:22.178720       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:40:22.178734       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:40:22.179721       1 config.go:329] "Starting node config controller"
	I0510 17:40:22.179749       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:40:22.279288       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:40:22.279328       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:40:22.279353       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:40:22.279922       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [5d41575d5f369fd9c9f0ce72b6cb8fa6a09f530a21e332c6cfa6dac44e671159] <==
	E0510 17:40:12.961204       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:40:12.961454       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 17:40:12.962491       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:40:12.963658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:40:12.966283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 17:40:12.966322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:40:12.966336       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 17:40:12.966662       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:40:12.968435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 17:40:12.968472       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:40:12.968683       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 17:40:12.969321       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 17:40:12.969638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:40:13.818536       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:40:13.880458       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 17:40:13.883032       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 17:40:13.890012       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:40:13.899207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:40:14.002440       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:40:14.097120       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 17:40:14.150064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:40:14.248162       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 17:40:14.296088       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:40:14.334006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0510 17:40:16.047273       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 17:45:31 addons-661496 kubelet[1566]: E0510 17:45:31.816646    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:e246aa22ad2cbdfbd19e2a6ca2b275e26245a21920e2b2d0666324cee3f15549: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:45:38 addons-661496 kubelet[1566]: E0510 17:45:38.816203    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:45:49 addons-661496 kubelet[1566]: E0510 17:45:49.443718    1566 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	May 10 17:45:49 addons-661496 kubelet[1566]: E0510 17:45:49.443796    1566 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	May 10 17:45:49 addons-661496 kubelet[1566]: E0510 17:45:49.443998    1566 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jq4hn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(8cfaa910-fd77-46b3-81a2-e85c5ca6e000): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	May 10 17:45:49 addons-661496 kubelet[1566]: E0510 17:45:49.445332    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:45:49 addons-661496 kubelet[1566]: E0510 17:45:49.818395    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:45:52 addons-661496 kubelet[1566]: I0510 17:45:52.815037    1566 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-v4gbz" secret="" err="secret \"gcp-auth\" not found"
	May 10 17:46:00 addons-661496 kubelet[1566]: E0510 17:46:00.816957    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:46:02 addons-661496 kubelet[1566]: E0510 17:46:02.815994    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:46:14 addons-661496 kubelet[1566]: E0510 17:46:14.816468    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:46:15 addons-661496 kubelet[1566]: E0510 17:46:15.080756    1566 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	May 10 17:46:15 addons-661496 kubelet[1566]: E0510 17:46:15.080818    1566 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	May 10 17:46:15 addons-661496 kubelet[1566]: E0510 17:46:15.080980    1566 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j5ztn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(fa098ebf-237d-4738-96c9-0bbde71445c1): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	May 10 17:46:15 addons-661496 kubelet[1566]: E0510 17:46:15.082183    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:46:28 addons-661496 kubelet[1566]: E0510 17:46:28.816920    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:46:28 addons-661496 kubelet[1566]: E0510 17:46:28.817484    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:46:31 addons-661496 kubelet[1566]: I0510 17:46:31.816091    1566 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	May 10 17:46:40 addons-661496 kubelet[1566]: E0510 17:46:40.816912    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:46:43 addons-661496 kubelet[1566]: E0510 17:46:43.816438    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:46:54 addons-661496 kubelet[1566]: E0510 17:46:54.816297    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:46:57 addons-661496 kubelet[1566]: E0510 17:46:57.816977    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	May 10 17:47:07 addons-661496 kubelet[1566]: I0510 17:47:07.815481    1566 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-v4gbz" secret="" err="secret \"gcp-auth\" not found"
	May 10 17:47:08 addons-661496 kubelet[1566]: E0510 17:47:08.816711    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="8cfaa910-fd77-46b3-81a2-e85c5ca6e000"
	May 10 17:47:09 addons-661496 kubelet[1566]: E0510 17:47:09.817137    1566 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fa098ebf-237d-4738-96c9-0bbde71445c1"
	
	
	==> storage-provisioner [5aa32f181c6fbb06b9466366d0447e0cb0a52b4c7da9597a4a94534d73372697] <==
	W0510 17:46:51.279384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:46:53.282587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:46:53.290700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:46:55.293651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:46:55.301382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:46:57.310113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:46:57.332999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:46:59.342724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:46:59.347669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:01.351395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:01.356232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:03.358777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:03.363698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:05.367426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:05.374814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:07.378577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:07.383842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:09.387381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:09.391768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:11.395279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:11.400726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:13.404418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:13.411694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:15.416002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:47:15.421166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-661496 -n addons-661496
helpers_test.go:261: (dbg) Run:  kubectl --context addons-661496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx test-local-path ingress-nginx-admission-create-fhvxz ingress-nginx-admission-patch-9d5dm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-661496 describe pod nginx test-local-path ingress-nginx-admission-create-fhvxz ingress-nginx-admission-patch-9d5dm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-661496 describe pod nginx test-local-path ingress-nginx-admission-create-fhvxz ingress-nginx-admission-patch-9d5dm: exit status 1 (73.880726ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-661496/192.168.39.168
	Start Time:       Sat, 10 May 2025 17:44:38 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.35
	IPs:
	  IP:  10.244.0.35
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5ztn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j5ztn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m38s                default-scheduler  Successfully assigned default/nginx to addons-661496
	  Normal   Pulling    64s (x4 over 2m38s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     61s (x4 over 2m36s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:65645c7bb6a0661892a8b03b89d0743208a18dd2f3f17a54ef4b76fb8e2f2a10: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     61s (x4 over 2m36s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x9 over 2m35s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7s (x9 over 2m35s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-661496/192.168.39.168
	Start Time:       Sat, 10 May 2025 17:44:13 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jq4hn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jq4hn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  3m3s                   default-scheduler  Successfully assigned default/test-local-path to addons-661496
	  Warning  Failed     2m14s (x2 over 2m42s)  kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:e246aa22ad2cbdfbd19e2a6ca2b275e26245a21920e2b2d0666324cee3f15549: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    90s (x4 over 3m2s)     kubelet            Pulling image "busybox:stable"
	  Warning  Failed     87s (x2 over 2m59s)    kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:ec15a5bd53cf9507beb851574654669e778a9735f8e605e0ee3d71fd07debbe1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     87s (x4 over 2m59s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x10 over 2m58s)    kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     8s (x10 over 2m58s)    kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fhvxz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9d5dm" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-661496 describe pod nginx test-local-path ingress-nginx-admission-create-fhvxz ingress-nginx-admission-patch-9d5dm: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.731913027s)
--- FAIL: TestAddons/parallel/LocalPath (231.51s)
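Note: the LocalPath and Ingress failures above share one root cause, visible in the kubelet log and in the pod events: Docker Hub answers unauthenticated pulls of docker.io/nginx:alpine and busybox:stable with 429 Too Many Requests. Since both pods run with ImagePullPolicy: IfNotPresent, side-loading the images into the node would let the containers start without any anonymous registry pull. A minimal sketch for a local reproduction, not part of the recorded run — it assumes a host-side Docker daemon, Docker Hub credentials, and the standard minikube image subcommand, with the profile name taken from this report:

	# Pull once on the host with authenticated credentials (counts against the
	# authenticated, much higher rate limit), then copy the images into the
	# cluster node so the kubelet finds them locally:
	docker login
	docker pull docker.io/nginx:alpine
	docker pull docker.io/library/busybox:stable
	out/minikube-linux-amd64 -p addons-661496 image load docker.io/nginx:alpine
	out/minikube-linux-amd64 -p addons-661496 image load busybox:stable

On CI runners the same effect is usually achieved by preloading the test images into the node image or by pointing containerd at an authenticated registry mirror; the commands above are only an illustration of why IfNotPresent makes side-loading sufficient here.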

x
+
TestFunctional/parallel/DashboardCmd (302.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-691821 --alsologtostderr -v=1]
functional_test.go:935: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-691821 --alsologtostderr -v=1] ...
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-691821 --alsologtostderr -v=1] stdout:
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-691821 --alsologtostderr -v=1] stderr:
I0510 17:58:53.383069 1182318 out.go:345] Setting OutFile to fd 1 ...
I0510 17:58:53.383195 1182318 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:58:53.383205 1182318 out.go:358] Setting ErrFile to fd 2...
I0510 17:58:53.383211 1182318 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:58:53.383398 1182318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
I0510 17:58:53.383624 1182318 mustload.go:65] Loading cluster: functional-691821
I0510 17:58:53.383965 1182318 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:58:53.384380 1182318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:58:53.384441 1182318 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:58:53.400072 1182318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
I0510 17:58:53.400552 1182318 main.go:141] libmachine: () Calling .GetVersion
I0510 17:58:53.401123 1182318 main.go:141] libmachine: Using API Version  1
I0510 17:58:53.401144 1182318 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:58:53.401588 1182318 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:58:53.401805 1182318 main.go:141] libmachine: (functional-691821) Calling .GetState
I0510 17:58:53.403313 1182318 host.go:66] Checking if "functional-691821" exists ...
I0510 17:58:53.403644 1182318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:58:53.403693 1182318 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:58:53.419179 1182318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39817
I0510 17:58:53.419665 1182318 main.go:141] libmachine: () Calling .GetVersion
I0510 17:58:53.420182 1182318 main.go:141] libmachine: Using API Version  1
I0510 17:58:53.420212 1182318 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:58:53.420569 1182318 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:58:53.420775 1182318 main.go:141] libmachine: (functional-691821) Calling .DriverName
I0510 17:58:53.420930 1182318 api_server.go:166] Checking apiserver status ...
I0510 17:58:53.421011 1182318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0510 17:58:53.421036 1182318 main.go:141] libmachine: (functional-691821) Calling .GetSSHHostname
I0510 17:58:53.423901 1182318 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:58:53.424321 1182318 main.go:141] libmachine: (functional-691821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:2e:2b", ip: ""} in network mk-functional-691821: {Iface:virbr1 ExpiryTime:2025-05-10 18:55:31 +0000 UTC Type:0 Mac:52:54:00:9f:2e:2b Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-691821 Clientid:01:52:54:00:9f:2e:2b}
I0510 17:58:53.424350 1182318 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined IP address 192.168.39.96 and MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:58:53.424490 1182318 main.go:141] libmachine: (functional-691821) Calling .GetSSHPort
I0510 17:58:53.424680 1182318 main.go:141] libmachine: (functional-691821) Calling .GetSSHKeyPath
I0510 17:58:53.424807 1182318 main.go:141] libmachine: (functional-691821) Calling .GetSSHUsername
I0510 17:58:53.424923 1182318 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/functional-691821/id_rsa Username:docker}
I0510 17:58:53.523046 1182318 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5239/cgroup
W0510 17:58:53.533344 1182318 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5239/cgroup: Process exited with status 1
stdout:

stderr:
I0510 17:58:53.533437 1182318 ssh_runner.go:195] Run: ls
I0510 17:58:53.537792 1182318 api_server.go:253] Checking apiserver healthz at https://192.168.39.96:8441/healthz ...
I0510 17:58:53.542755 1182318 api_server.go:279] https://192.168.39.96:8441/healthz returned 200:
ok
W0510 17:58:53.542802 1182318 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0510 17:58:53.542961 1182318 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:58:53.542977 1182318 addons.go:69] Setting dashboard=true in profile "functional-691821"
I0510 17:58:53.542986 1182318 addons.go:238] Setting addon dashboard=true in "functional-691821"
I0510 17:58:53.543012 1182318 host.go:66] Checking if "functional-691821" exists ...
I0510 17:58:53.543280 1182318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:58:53.543316 1182318 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:58:53.559230 1182318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45689
I0510 17:58:53.560082 1182318 main.go:141] libmachine: () Calling .GetVersion
I0510 17:58:53.561480 1182318 main.go:141] libmachine: Using API Version  1
I0510 17:58:53.561508 1182318 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:58:53.561895 1182318 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:58:53.562415 1182318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:58:53.562476 1182318 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:58:53.578058 1182318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
I0510 17:58:53.578593 1182318 main.go:141] libmachine: () Calling .GetVersion
I0510 17:58:53.579198 1182318 main.go:141] libmachine: Using API Version  1
I0510 17:58:53.579226 1182318 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:58:53.579563 1182318 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:58:53.579748 1182318 main.go:141] libmachine: (functional-691821) Calling .GetState
I0510 17:58:53.581437 1182318 main.go:141] libmachine: (functional-691821) Calling .DriverName
I0510 17:58:53.583822 1182318 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0510 17:58:53.585147 1182318 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0510 17:58:53.586415 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0510 17:58:53.586448 1182318 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0510 17:58:53.586471 1182318 main.go:141] libmachine: (functional-691821) Calling .GetSSHHostname
I0510 17:58:53.590293 1182318 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:58:53.590726 1182318 main.go:141] libmachine: (functional-691821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:2e:2b", ip: ""} in network mk-functional-691821: {Iface:virbr1 ExpiryTime:2025-05-10 18:55:31 +0000 UTC Type:0 Mac:52:54:00:9f:2e:2b Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-691821 Clientid:01:52:54:00:9f:2e:2b}
I0510 17:58:53.590751 1182318 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined IP address 192.168.39.96 and MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:58:53.590924 1182318 main.go:141] libmachine: (functional-691821) Calling .GetSSHPort
I0510 17:58:53.591248 1182318 main.go:141] libmachine: (functional-691821) Calling .GetSSHKeyPath
I0510 17:58:53.591467 1182318 main.go:141] libmachine: (functional-691821) Calling .GetSSHUsername
I0510 17:58:53.591648 1182318 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/functional-691821/id_rsa Username:docker}
I0510 17:58:53.708913 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0510 17:58:53.708952 1182318 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0510 17:58:53.732863 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0510 17:58:53.732890 1182318 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0510 17:58:53.768412 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0510 17:58:53.768442 1182318 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0510 17:58:53.809767 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0510 17:58:53.809796 1182318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0510 17:58:53.834633 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0510 17:58:53.834657 1182318 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0510 17:58:53.876577 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0510 17:58:53.876608 1182318 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0510 17:58:53.913609 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0510 17:58:53.913647 1182318 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0510 17:58:53.940963 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0510 17:58:53.940998 1182318 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0510 17:58:53.981506 1182318 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0510 17:58:53.981539 1182318 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0510 17:58:54.016544 1182318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0510 17:58:54.874878 1182318 main.go:141] libmachine: Making call to close driver server
I0510 17:58:54.874905 1182318 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:58:54.875286 1182318 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:58:54.875307 1182318 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 17:58:54.875316 1182318 main.go:141] libmachine: Making call to close driver server
I0510 17:58:54.875323 1182318 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:58:54.875566 1182318 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:58:54.875611 1182318 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 17:58:54.875652 1182318 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
I0510 17:58:54.877330 1182318 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-691821 addons enable metrics-server

I0510 17:58:54.878789 1182318 addons.go:201] Writing out "functional-691821" config to set dashboard=true...
W0510 17:58:54.879038 1182318 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0510 17:58:54.879699 1182318 kapi.go:59] client config for functional-691821: &rest.Config{Host:"https://192.168.39.96:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt", KeyFile:"/home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.key", CAFile:"/home/jenkins/minikube-integration/20720-1165049/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24b3a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0510 17:58:54.880200 1182318 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0510 17:58:54.880228 1182318 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0510 17:58:54.880241 1182318 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0510 17:58:54.880252 1182318 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0510 17:58:54.889938 1182318 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  4aabfe8b-92ab-4a7e-9b7b-fbca40b28b86 899 0 2025-05-10 17:58:54 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-05-10 17:58:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.105.148.186,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.105.148.186],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0510 17:58:54.890081 1182318 out.go:270] * Launching proxy ...
* Launching proxy ...
I0510 17:58:54.890149 1182318 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-691821 proxy --port 36195]
I0510 17:58:54.890415 1182318 dashboard.go:157] Waiting for kubectl to output host:port ...
I0510 17:58:54.933294 1182318 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0510 17:58:54.933343 1182318 out.go:270] * Verifying proxy health ...
* Verifying proxy health ...
I0510 17:58:54.941893 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[efbd0626-33e3-4cb7-9620-6880ff7e6646] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000245680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006acdc0 TLS:<nil>}
I0510 17:58:54.941981 1182318 retry.go:31] will retry after 50.363µs: Temporary Error: unexpected response code: 503
I0510 17:58:54.945831 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49febb68-2f34-4ca3-9590-11a47922bf43] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000097540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000640780 TLS:<nil>}
I0510 17:58:54.945897 1182318 retry.go:31] will retry after 210.108µs: Temporary Error: unexpected response code: 503
I0510 17:58:54.949251 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[74e6f1f6-39ee-45df-bfbc-8c311ab6d516] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000514140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006acf00 TLS:<nil>}
I0510 17:58:54.949311 1182318 retry.go:31] will retry after 304.872µs: Temporary Error: unexpected response code: 503
I0510 17:58:54.952353 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8a68a112-4f64-4e99-975b-17de47756b1b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000097d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006408c0 TLS:<nil>}
I0510 17:58:54.952390 1182318 retry.go:31] will retry after 191.966µs: Temporary Error: unexpected response code: 503
I0510 17:58:54.955479 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[26469dee-c039-4ee6-83a6-f028a8e5e58d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc0005143c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006ad040 TLS:<nil>}
I0510 17:58:54.955526 1182318 retry.go:31] will retry after 463.325µs: Temporary Error: unexpected response code: 503
I0510 17:58:54.958411 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1c84d305-3a60-48cb-a615-c7c91a27f668] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000940040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000640a00 TLS:<nil>}
I0510 17:58:54.958468 1182318 retry.go:31] will retry after 1.074623ms: Temporary Error: unexpected response code: 503
I0510 17:58:54.961371 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1b33a4be-f97c-40b2-8413-94f20b3b23e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000515640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cc500 TLS:<nil>}
I0510 17:58:54.961439 1182318 retry.go:31] will retry after 957.864µs: Temporary Error: unexpected response code: 503
I0510 17:58:54.968595 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ece5b492-1c47-42e4-943b-e7d13a81325d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000940100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000640b40 TLS:<nil>}
I0510 17:58:54.968672 1182318 retry.go:31] will retry after 2.540808ms: Temporary Error: unexpected response code: 503
I0510 17:58:54.978342 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[111dc770-d6c3-4742-a5cd-87c9f759fc0a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000940200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cc8c0 TLS:<nil>}
I0510 17:58:54.978392 1182318 retry.go:31] will retry after 3.625219ms: Temporary Error: unexpected response code: 503
I0510 17:58:54.987189 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d8e41fb9-c732-4384-ad47-8ede05cbae4f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000515dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cca00 TLS:<nil>}
I0510 17:58:54.987254 1182318 retry.go:31] will retry after 5.677714ms: Temporary Error: unexpected response code: 503
I0510 17:58:54.995879 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c50d497c-59bd-449b-9d27-3517fff62565] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc000971080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000640c80 TLS:<nil>}
I0510 17:58:54.995931 1182318 retry.go:31] will retry after 4.773663ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.003643 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6a76e992-8274-4e9c-bde8-af2c143e0feb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:54 GMT]] Body:0xc0009b60c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006ad180 TLS:<nil>}
I0510 17:58:55.003715 1182318 retry.go:31] will retry after 9.494493ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.016406 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[576fada1-8b8b-49d2-8c4e-65729782c737] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc000940340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000640f00 TLS:<nil>}
I0510 17:58:55.016465 1182318 retry.go:31] will retry after 15.090074ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.035630 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e58d8cf2-baa6-4d3c-a532-98fadf05ae40] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc0009b6240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ccc80 TLS:<nil>}
I0510 17:58:55.035695 1182318 retry.go:31] will retry after 17.653601ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.058343 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[133cf989-2fb2-45cd-a8f2-86c4a08836ea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc000940440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000641040 TLS:<nil>}
I0510 17:58:55.058410 1182318 retry.go:31] will retry after 15.681544ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.077749 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5999cafd-f160-41ad-99c8-73ca1446c42f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc000971440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ccdc0 TLS:<nil>}
I0510 17:58:55.077842 1182318 retry.go:31] will retry after 56.715456ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.138187 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c7647e9-cfda-40d6-94ff-711b9d0ba27b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc000971540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006ad400 TLS:<nil>}
I0510 17:58:55.138254 1182318 retry.go:31] will retry after 36.606394ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.178529 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[352a0cf0-e1d4-4409-9b81-9dde59b93de1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc000940540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006ad7c0 TLS:<nil>}
I0510 17:58:55.178608 1182318 retry.go:31] will retry after 71.173648ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.253786 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f672b5b1-f4bf-4cf8-8c7e-e409c831c646] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc000940600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cd2c0 TLS:<nil>}
I0510 17:58:55.253878 1182318 retry.go:31] will retry after 192.626128ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.450156 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[282be72b-cc74-4024-94a2-6f227f026d4c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc0009b6440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cd680 TLS:<nil>}
I0510 17:58:55.450240 1182318 retry.go:31] will retry after 268.156788ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.721665 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[db50a02e-6b89-4e3e-a068-1ef93dd039d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc000940680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000641180 TLS:<nil>}
I0510 17:58:55.721737 1182318 retry.go:31] will retry after 243.799419ms: Temporary Error: unexpected response code: 503
I0510 17:58:55.969845 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[264b6382-be4a-4657-9fcb-4b8f9dc554d6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:55 GMT]] Body:0xc000971700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cd7c0 TLS:<nil>}
I0510 17:58:55.969913 1182318 retry.go:31] will retry after 503.148512ms: Temporary Error: unexpected response code: 503
I0510 17:58:56.476904 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[237aa57f-68d0-49d1-8a2c-311c302c268d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:56 GMT]] Body:0xc0009b6600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006ad900 TLS:<nil>}
I0510 17:58:56.476975 1182318 retry.go:31] will retry after 977.303065ms: Temporary Error: unexpected response code: 503
I0510 17:58:57.458313 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eb5736f1-1257-445d-ab08-81d173903961] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:57 GMT]] Body:0xc000940740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006412c0 TLS:<nil>}
I0510 17:58:57.458386 1182318 retry.go:31] will retry after 1.227554086s: Temporary Error: unexpected response code: 503
I0510 17:58:58.690161 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4acc72bf-c7d2-4b91-b11c-a3c70abc0ea6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:58:58 GMT]] Body:0xc0009b6740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cda40 TLS:<nil>}
I0510 17:58:58.690239 1182318 retry.go:31] will retry after 1.381718709s: Temporary Error: unexpected response code: 503
I0510 17:59:00.075324 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06ea43a2-9879-4b38-ae7e-e64c208a8026] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:59:00 GMT]] Body:0xc000940840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000641400 TLS:<nil>}
I0510 17:59:00.075428 1182318 retry.go:31] will retry after 1.678428293s: Temporary Error: unexpected response code: 503
I0510 17:59:01.759048 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ab63052-bf21-41a1-9f09-ecd156499116] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:59:01 GMT]] Body:0xc0009b6900 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cdb80 TLS:<nil>}
I0510 17:59:01.759132 1182318 retry.go:31] will retry after 3.98495469s: Temporary Error: unexpected response code: 503
I0510 17:59:05.747505 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c6e1fbb-a2cf-4986-8756-e173aa717ac9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:59:05 GMT]] Body:0xc0009b7200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cdcc0 TLS:<nil>}
I0510 17:59:05.747577 1182318 retry.go:31] will retry after 4.315174005s: Temporary Error: unexpected response code: 503
I0510 17:59:10.068045 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cab74daa-a57d-4e62-9f13-afda4939ab6f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:59:10 GMT]] Body:0xc0009409c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000641540 TLS:<nil>}
I0510 17:59:10.068125 1182318 retry.go:31] will retry after 10.407407919s: Temporary Error: unexpected response code: 503
I0510 17:59:20.480057 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[16959598-d8ca-4280-a0bc-4c615df53ff6] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:59:20 GMT]] Body:0xc0009b7340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004cde00 TLS:<nil>}
I0510 17:59:20.480127 1182318 retry.go:31] will retry after 15.431173935s: Temporary Error: unexpected response code: 503
I0510 17:59:35.914671 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84bf8b21-dcee-4b1f-b866-32ac231d9d58] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:59:35 GMT]] Body:0xc000971800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000641680 TLS:<nil>}
I0510 17:59:35.914762 1182318 retry.go:31] will retry after 16.255620173s: Temporary Error: unexpected response code: 503
I0510 17:59:52.177388 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8ae955f9-8cf6-4dca-aa6d-f9e02a328de7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:59:52 GMT]] Body:0xc0009b7600 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a18000 TLS:<nil>}
I0510 17:59:52.177456 1182318 retry.go:31] will retry after 35.596264865s: Temporary Error: unexpected response code: 503
I0510 18:00:27.779314 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0e7492de-6a00-4a82-9527-6b2c255a58f7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:00:27 GMT]] Body:0xc000940a80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006417c0 TLS:<nil>}
I0510 18:00:27.779396 1182318 retry.go:31] will retry after 56.605833922s: Temporary Error: unexpected response code: 503
I0510 18:01:24.389381 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c133788d-bf97-4521-8ac1-b2ad40999c12] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:01:24 GMT]] Body:0xc00068c380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a18140 TLS:<nil>}
I0510 18:01:24.389477 1182318 retry.go:31] will retry after 1m21.591165414s: Temporary Error: unexpected response code: 503
I0510 18:02:45.984168 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[53f00f5d-d737-433e-847c-fae9eddaf537] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:02:45 GMT]] Body:0xc00068c400 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000640140 TLS:<nil>}
I0510 18:02:45.984258 1182318 retry.go:31] will retry after 47.458631167s: Temporary Error: unexpected response code: 503
I0510 18:03:33.447030 1182318 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ca25e4dd-2db8-49cc-9836-3b012905dccb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 18:03:33 GMT]] Body:0xc000b460c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a18280 TLS:<nil>}
I0510 18:03:33.447129 1182318 retry.go:31] will retry after 44.987440428s: Temporary Error: unexpected response code: 503
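The stderr above shows the failure's shape: the kubernetes-dashboard Service exists and kubectl proxy is serving on 127.0.0.1:36195, but every probe of the proxy URL returns 503, which typically means the Service has no ready endpoints. The dashboard pod likely never became ready, plausibly the same Docker Hub rate limiting seen in LocalPath, since both addon images (docker.io/kubernetesui/dashboard and metrics-scraper) come from that registry. The retry.go lines follow a roughly exponential schedule, from about 50µs up past a minute, until the roughly 5-minute test budget expires. Below is a minimal stdlib sketch of the same poll-until-healthy pattern, for illustration only; minikube's own retry helper adds jitter and caps that this omits.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns 200 OK or the budget expires,
	// doubling the delay between attempts, much like the retry.go lines above.
	func waitHealthy(url string, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		delay := 50 * time.Microsecond // the first retry in this log was ~50µs
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("no 200 from %s within %v", url, budget)
	}

	func main() {
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		fmt.Println(waitHealthy(url, 5*time.Minute))
	}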
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-691821 -n functional-691821
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 logs -n 25: (1.381869006s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-691821 ssh stat                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh sudo                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port4113061102/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh -- ls                                              | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh sudo                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| update-context | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh pgrep                                              | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-691821 image build -t                                         | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | localhost/my-image:functional-691821                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-691821 image ls                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:58:53
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:58:53.248483 1182290 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:58:53.248630 1182290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:53.248640 1182290 out.go:358] Setting ErrFile to fd 2...
	I0510 17:58:53.248647 1182290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:53.248827 1182290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 17:58:53.249383 1182290 out.go:352] Setting JSON to false
	I0510 17:58:53.250412 1182290 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":20477,"bootTime":1746879456,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:58:53.250511 1182290 start.go:140] virtualization: kvm guest
	I0510 17:58:53.252374 1182290 out.go:177] * [functional-691821] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:58:53.253636 1182290 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:58:53.253653 1182290 notify.go:220] Checking for updates...
	I0510 17:58:53.256186 1182290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:58:53.257433 1182290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:58:53.258882 1182290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:58:53.259972 1182290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:58:53.261113 1182290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:58:53.262700 1182290 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:58:53.263147 1182290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:53.263200 1182290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:53.279049 1182290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0510 17:58:53.279474 1182290 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:53.279900 1182290 main.go:141] libmachine: Using API Version  1
	I0510 17:58:53.279945 1182290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:53.280307 1182290 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:53.280513 1182290 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:53.280799 1182290 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:58:53.281084 1182290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:53.281120 1182290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:53.296869 1182290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0510 17:58:53.297413 1182290 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:53.297937 1182290 main.go:141] libmachine: Using API Version  1
	I0510 17:58:53.297959 1182290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:53.298294 1182290 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:53.298467 1182290 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:53.331941 1182290 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 17:58:53.333118 1182290 start.go:304] selected driver: kvm2
	I0510 17:58:53.333135 1182290 start.go:908] validating driver "kvm2" against &{Name:functional-691821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-691821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:58:53.333262 1182290 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:58:53.334422 1182290 cni.go:84] Creating CNI manager for ""
	I0510 17:58:53.334481 1182290 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:58:53.334534 1182290 start.go:347] cluster config:
	{Name:functional-691821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-691821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:58:53.335966 1182290 out.go:177] * dry-run validation complete!
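
The "Last Start" log above is a dry-run restart against the existing functional-691821 profile. A command of roughly this shape would produce it; this is a reconstruction from the config dump, since the exact invocation is not captured in the log:

# Hypothetical reconstruction, not the recorded command line.
out/minikube-linux-amd64 start -p functional-691821 --dry-run \
  --driver=kvm2 --container-runtime=containerd --alsologtostderr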
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e817147458eb6       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   c60694e2ae4a9       busybox-mount
	cf9b4f2bda584       82e4c8a736a4f       5 minutes ago       Running             echoserver                0                   df87dfa2642c3       hello-node-fcfd88b6f-dmfws
	3cf8f0a97cccb       82e4c8a736a4f       5 minutes ago       Running             echoserver                0                   07de351a73ba2       hello-node-connect-58f9cf68d8-s9rfl
	e2d282b25a3fc       6e38f40d628db       5 minutes ago       Running             storage-provisioner       5                   16968ce74e551       storage-provisioner
	8d326984f158b       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       4                   16968ce74e551       storage-provisioner
	64c36ba7cc484       f1184a0bd7fe5       5 minutes ago       Running             kube-proxy                2                   b184287cd85b5       kube-proxy-64t85
	a07d63afac120       1cf5f116067c6       5 minutes ago       Running             coredns                   2                   18e2a2dc46b0b       coredns-674b8bbfcf-9frgq
	2216f7cab388d       6ba9545b2183e       5 minutes ago       Running             kube-apiserver            0                   1cf595b2025dc       kube-apiserver-functional-691821
	050e79a4fc59d       1d579cb6d6967       5 minutes ago       Running             kube-controller-manager   3                   84647453de7f2       kube-controller-manager-functional-691821
	b989fd8fae3bc       8d72586a76469       5 minutes ago       Running             kube-scheduler            2                   ff668932218ea       kube-scheduler-functional-691821
	75c55deac05d6       499038711c081       5 minutes ago       Running             etcd                      2                   b4a45e539be46       etcd-functional-691821
	ae0cf2a6aa1d6       1d579cb6d6967       6 minutes ago       Exited              kube-controller-manager   2                   84647453de7f2       kube-controller-manager-functional-691821
	2c82f924ca0f1       8d72586a76469       6 minutes ago       Exited              kube-scheduler            1                   ff668932218ea       kube-scheduler-functional-691821
	856acec20060e       499038711c081       6 minutes ago       Exited              etcd                      1                   b4a45e539be46       etcd-functional-691821
	c37c65e582838       1cf5f116067c6       6 minutes ago       Exited              coredns                   1                   18e2a2dc46b0b       coredns-674b8bbfcf-9frgq
	21669a32b7e33       f1184a0bd7fe5       6 minutes ago       Exited              kube-proxy                1                   b184287cd85b5       kube-proxy-64t85
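
This table is the CRI container state on the node; the same view can be pulled directly, assuming crictl is present in the guest image (minikube's ISO ships it):

# List all CRI containers on the node, including exited attempts:
out/minikube-linux-amd64 -p functional-691821 ssh -- sudo crictl ps -a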
	
	
	==> containerd <==
	May 10 18:00:30 functional-691821 containerd[4313]: time="2025-05-10T18:00:30.546899122Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	May 10 18:00:30 functional-691821 containerd[4313]: time="2025-05-10T18:00:30.550082076Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:00:31 functional-691821 containerd[4313]: time="2025-05-10T18:00:31.126187566Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:00:32 functional-691821 containerd[4313]: time="2025-05-10T18:00:32.785822707Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:00:32 functional-691821 containerd[4313]: time="2025-05-10T18:00:32.786367018Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	May 10 18:01:31 functional-691821 containerd[4313]: time="2025-05-10T18:01:31.546615408Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	May 10 18:01:31 functional-691821 containerd[4313]: time="2025-05-10T18:01:31.549497536Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:01:32 functional-691821 containerd[4313]: time="2025-05-10T18:01:32.151208487Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:01:33 functional-691821 containerd[4313]: time="2025-05-10T18:01:33.808755574Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	May 10 18:01:33 functional-691821 containerd[4313]: time="2025-05-10T18:01:33.808859759Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:01:44 functional-691821 containerd[4313]: time="2025-05-10T18:01:44.548424304Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	May 10 18:01:44 functional-691821 containerd[4313]: time="2025-05-10T18:01:44.551415753Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:01:45 functional-691821 containerd[4313]: time="2025-05-10T18:01:45.154892540Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:01:46 functional-691821 containerd[4313]: time="2025-05-10T18:01:46.823476980Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:01:46 functional-691821 containerd[4313]: time="2025-05-10T18:01:46.823605522Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10966"
	May 10 18:01:58 functional-691821 containerd[4313]: time="2025-05-10T18:01:58.554868073Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:01:58 functional-691821 containerd[4313]: time="2025-05-10T18:01:58.557798058Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:01:59 functional-691821 containerd[4313]: time="2025-05-10T18:01:59.148759502Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:02:00 functional-691821 containerd[4313]: time="2025-05-10T18:02:00.801502649Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:02:00 functional-691821 containerd[4313]: time="2025-05-10T18:02:00.801599807Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	May 10 18:02:00 functional-691821 containerd[4313]: time="2025-05-10T18:02:00.803411478Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	May 10 18:02:00 functional-691821 containerd[4313]: time="2025-05-10T18:02:00.805790262Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:02:01 functional-691821 containerd[4313]: time="2025-05-10T18:02:01.406258787Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:02:03 functional-691821 containerd[4313]: time="2025-05-10T18:02:03.052332072Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:02:03 functional-691821 containerd[4313]: time="2025-05-10T18:02:03.052457819Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
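
Two distinct failures repeat through this block: every pull attempt first logs "failed to decode hosts.toml" (a malformed registry hosts file under /etc/containerd/certs.d/), and the pull itself then dies on Docker Hub's unauthenticated 429 rate limit. A sketch of a well-formed hosts.toml follows; the mirror URL is a placeholder, not part of this run, and a mirror only helps the 429s if it is authenticated or uncapped:

# Sketch only: write a valid certs.d hosts.toml for docker.io inside the node.
sudo mkdir -p /etc/containerd/certs.d/docker.io
sudo tee /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
EOF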
	
	
	==> coredns [a07d63afac1208e5d7b51664838271e7c178dc2599c84b9861fe7d38718ec2f0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:56700 - 25698 "HINFO IN 1577412054363027221.8669343729915223077. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033419639s
	
	
	==> coredns [c37c65e582838a84367d5743a692b4338f105733d6e1734cf7d38f0aebfeb391] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:53064 - 49758 "HINFO IN 1882376968732920025.5461233709581115618. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025064626s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
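
The connect-refused lines in the first coredns block fall inside the apiserver restart window, and the 5s lameduck on SIGTERM here is consistent with the stock kubeadm health configuration. The loaded Corefile behind the reported configuration SHA512 can be confirmed with:

kubectl --context functional-691821 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'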
	
	
	==> describe nodes <==
	Name:               functional-691821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-691821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=functional-691821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_56_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:55:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-691821
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:03:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:59:11 +0000   Sat, 10 May 2025 17:55:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:59:11 +0000   Sat, 10 May 2025 17:55:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:59:11 +0000   Sat, 10 May 2025 17:55:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:59:11 +0000   Sat, 10 May 2025 17:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    functional-691821
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912740Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6b92320f3b244d19d8de03c98e3fa63
	  System UUID:                a6b92320-f3b2-44d1-9d8d-e03c98e3fa63
	  Boot ID:                    66acc6ce-de22-402d-970e-cba38e9f4da1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-s9rfl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  default                     hello-node-fcfd88b6f-dmfws                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     mysql-58ccfd96bb-zxjxx                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    5m19s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 coredns-674b8bbfcf-9frgq                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m48s
	  kube-system                 etcd-functional-691821                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m53s
	  kube-system                 kube-apiserver-functional-691821              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-controller-manager-functional-691821     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-proxy-64t85                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-scheduler-functional-691821              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-qgdvz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-bm8x7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m47s                  kube-proxy       
	  Normal  Starting                 5m43s                  kube-proxy       
	  Normal  Starting                 6m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m (x8 over 8m)        kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m (x8 over 8m)        kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m (x7 over 8m)        kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m54s                  kubelet          Starting kubelet.
	  Normal  NodeReady                7m53s                  kubelet          Node functional-691821 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  7m53s                  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m53s                  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m53s                  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m50s                  node-controller  Node functional-691821 event: Registered Node functional-691821 in Controller
	  Normal  NodeHasNoDiskPressure    6m39s (x8 over 6m39s)  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m39s (x8 over 6m39s)  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     6m39s (x7 over 6m39s)  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m33s                  node-controller  Node functional-691821 event: Registered Node functional-691821 in Controller
	  Normal  Starting                 5m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m48s (x8 over 5m48s)  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m48s (x8 over 5m48s)  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m48s (x7 over 5m48s)  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m41s                  node-controller  Node functional-691821 event: Registered Node functional-691821 in Controller
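
The percentages in the Allocated resources table above are taken against the node's allocatable capacity (2 CPU, 3912740Ki memory), and the printed figures fall out of integer division directly:

# cpu requests: 1350m of 2000m allocatable; memory requests: 682Mi of 3912740Ki.
echo $((1350 * 100 / 2000))%          # 67%
echo $((682 * 1024 * 100 / 3912740))% # 17%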
	
	
	==> dmesg <==
	[  +0.004996] (rpcbind)[142]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.123380] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085397] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.117100] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.094887] kauditd_printk_skb: 46 callbacks suppressed
	[May10 17:56] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.988309] kauditd_printk_skb: 19 callbacks suppressed
	[ +30.492392] kauditd_printk_skb: 77 callbacks suppressed
	[  +0.840875] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.078180] kauditd_printk_skb: 13 callbacks suppressed
	[May10 17:57] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.258728] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.802612] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.127631] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.942984] kauditd_printk_skb: 81 callbacks suppressed
	[May10 17:58] kauditd_printk_skb: 10 callbacks suppressed
	[  +4.185734] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.522787] kauditd_printk_skb: 11 callbacks suppressed
	[  +1.211923] kauditd_printk_skb: 33 callbacks suppressed
	[  +1.856047] kauditd_printk_skb: 19 callbacks suppressed
	[  +4.809920] kauditd_printk_skb: 15 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 25 callbacks suppressed
	[  +2.936875] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [75c55deac05d69f20fa045dc7299aba1671d3f146746315886d4df3296540b21] <==
	{"level":"info","ts":"2025-05-10T17:57:59.812159Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T17:57:59.812578Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"d4b4d4eeb3ae7df8","initial-advertise-peer-urls":["https://192.168.39.96:2380"],"listen-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.96:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T17:57:59.812341Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:59.813505Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:59.813384Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T17:58:01.584903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 is starting a new election at term 3"}
	{"level":"info","ts":"2025-05-10T17:58:01.584970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:58:01.585064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgPreVoteResp from d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2025-05-10T17:58:01.585095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became candidate at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.585153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgVoteResp from d4b4d4eeb3ae7df8 at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.585200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became leader at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.585210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4b4d4eeb3ae7df8 elected leader d4b4d4eeb3ae7df8 at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.586844Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"d4b4d4eeb3ae7df8","local-member-attributes":"{Name:functional-691821 ClientURLs:[https://192.168.39.96:2379]}","request-path":"/0/members/d4b4d4eeb3ae7df8/attributes","cluster-id":"f38f0aa72455c2b8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:58:01.586892Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:58:01.587090Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:58:01.587920Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:58:01.588064Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:58:01.588935Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:58:01.588969Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:58:01.589441Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:58:01.590850Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.96:2379"}
	{"level":"warn","ts":"2025-05-10T17:58:54.157352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.93852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:58:54.157432Z","caller":"traceutil/trace.go:171","msg":"trace[919417013] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:845; }","duration":"189.134784ms","start":"2025-05-10T17:58:53.968285Z","end":"2025-05-10T17:58:54.157420Z","steps":["trace[919417013] 'range keys from in-memory index tree'  (duration: 188.861938ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:58:54.157604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.022159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:58:54.157622Z","caller":"traceutil/trace.go:171","msg":"trace[935324799] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:845; }","duration":"148.082091ms","start":"2025-05-10T17:58:54.009534Z","end":"2025-05-10T17:58:54.157616Z","steps":["trace[935324799] 'range keys from in-memory index tree'  (duration: 147.943061ms)"],"step_count":1}
	
	
	==> etcd [856acec20060e53c35110df29d6d2e6f031d2518c6fe6d8566922535e7deaeef] <==
	{"level":"info","ts":"2025-05-10T17:57:05.824420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:57:05.824579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgPreVoteResp from d4b4d4eeb3ae7df8 at term 2"}
	{"level":"info","ts":"2025-05-10T17:57:05.824708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.824830Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgVoteResp from d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.824991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.825157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4b4d4eeb3ae7df8 elected leader d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.832239Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"d4b4d4eeb3ae7df8","local-member-attributes":"{Name:functional-691821 ClientURLs:[https://192.168.39.96:2379]}","request-path":"/0/members/d4b4d4eeb3ae7df8/attributes","cluster-id":"f38f0aa72455c2b8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:57:05.832292Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:57:05.832739Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:57:05.832785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:57:05.832329Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:57:05.833673Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:57:05.833841Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:57:05.834554Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:57:05.834573Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.96:2379"}
	{"level":"info","ts":"2025-05-10T17:57:58.744115Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T17:57:58.744232Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-691821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	{"level":"info","ts":"2025-05-10T17:57:58.745876Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4b4d4eeb3ae7df8","current-leader-member-id":"d4b4d4eeb3ae7df8"}
	{"level":"warn","ts":"2025-05-10T17:57:58.745956Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:57:58.745986Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:57:58.746029Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:57:58.746069Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T17:57:58.749474Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:58.749584Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:58.749593Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-691821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	
	
	==> kernel <==
	 18:03:54 up 8 min,  0 user,  load average: 0.10, 0.34, 0.24
	Linux functional-691821 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [2216f7cab388d023cba0dd87098e5d38a47d25ab30366eae1ececa307d730989] <==
	I0510 17:58:09.640700       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0510 17:58:10.414079       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 17:58:10.567267       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0510 17:58:10.834368       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.96]
	I0510 17:58:10.835564       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 17:58:10.843158       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 17:58:11.246747       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 17:58:11.283333       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 17:58:11.312191       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 17:58:11.317941       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 17:58:12.834577       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:13.079769       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 17:58:30.131589       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:30.144804       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.188.157"}
	I0510 17:58:33.594676       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:35.152296       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.132.59"}
	I0510 17:58:35.157933       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:38.312353       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:38.322991       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.235.170"}
	I0510 17:58:43.513813       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:43.523104       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.188.137"}
	I0510 17:58:54.528699       1 controller.go:667] quota admission added evaluator for: namespaces
	I0510 17:58:54.828881       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.148.186"}
	I0510 17:58:54.832290       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:54.864279       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.126.207"}
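
Each "allocated clusterIPs" line above should correspond to a live Service object; cross-checking the allocations against the cluster is a one-liner:

kubectl --context functional-691821 get svc -A -o wide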
	
	
	==> kube-controller-manager [050e79a4fc59dac114131acb37aa8a9d9a80425a2779cd3da14fb462c0946fe2] <==
	I0510 17:58:13.106087       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:58:13.106227       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0510 17:58:13.106357       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 17:58:13.106374       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 17:58:13.111960       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:58:13.115883       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 17:58:13.125966       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:58:13.138270       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 17:58:13.155184       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 17:58:13.155214       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 17:58:13.155636       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 17:58:13.155809       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-691821"
	I0510 17:58:13.155895       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 17:58:13.156092       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 17:58:13.541983       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:58:13.553356       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:58:13.553577       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:58:13.553822       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0510 17:58:54.626700       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.651543       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.651688       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.673886       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.674274       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.683624       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.685729       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
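
The repeated ReplicaSet sync failures are a creation-order race: the dashboard ReplicaSets were reconciled before their ServiceAccount existed. The 5m-old dashboard pods in the node description above suggest it resolved itself on retry; to verify directly:

kubectl --context functional-691821 -n kubernetes-dashboard get serviceaccount,pods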
	
	
	==> kube-controller-manager [ae0cf2a6aa1d620418f5d84d42322a622df1f38d99ad5e0e4630991991a563d7] <==
	I0510 17:57:21.220606       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 17:57:21.220836       1 shared_informer.go:357] "Caches are synced" controller="expand"
	I0510 17:57:21.224274       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 17:57:21.226669       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 17:57:21.226976       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 17:57:21.228359       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 17:57:21.228611       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 17:57:21.229897       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 17:57:21.231063       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 17:57:21.232236       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 17:57:21.273439       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 17:57:21.321261       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0510 17:57:21.425200       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 17:57:21.425661       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 17:57:21.426717       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-691821"
	I0510 17:57:21.426926       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 17:57:21.453313       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 17:57:21.528653       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:57:21.532160       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:57:21.568787       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:57:21.570183       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:57:21.926438       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:57:21.926468       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:57:21.926474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 17:57:21.953665       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [21669a32b7e33862427a098afcd9bfcdebe112b21a94c5be46fc56fb52098d7f] <==
	E0510 17:56:57.550398       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 17:57:05.957863       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.96"]
	E0510 17:57:05.958240       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:57:06.026514       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 17:57:06.026753       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 17:57:06.026899       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:57:06.053117       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:57:06.056696       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:57:06.056725       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:57:06.064233       1 config.go:199] "Starting service config controller"
	I0510 17:57:06.064270       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:57:06.064300       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:57:06.064304       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:57:06.064332       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:57:06.064350       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:57:06.065114       1 config.go:329] "Starting node config controller"
	I0510 17:57:06.065135       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:57:06.165180       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:57:06.165469       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:57:06.165991       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:57:06.166076       1 shared_informer.go:357] "Caches are synced" controller="node config"
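
The nftables cleanup error at the top of this block is kube-proxy probing for stale ip6 nftables rules on a kernel without that support, after which it falls back to the iptables proxier ("Using iptables Proxier" a few lines later). The failing probe can be replayed by hand, assuming the nft binary is present in the guest:

# Re-run the exact command from the error inside the node:
out/minikube-linux-amd64 -p functional-691821 ssh -- sudo nft add table ip6 kube-proxy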
	
	
	==> kube-proxy [64c36ba7cc4847ab2180b08ff83121ec785778882563a18aa707bbb1b259e14f] <==
	E0510 17:58:11.107097       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 17:58:11.119418       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.96"]
	E0510 17:58:11.119678       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:58:11.171387       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 17:58:11.171414       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 17:58:11.171434       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:58:11.182345       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:58:11.182537       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:58:11.182549       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:58:11.183639       1 config.go:199] "Starting service config controller"
	I0510 17:58:11.183654       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:58:11.188921       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:58:11.188930       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:58:11.188944       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:58:11.188947       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:58:11.189192       1 config.go:329] "Starting node config controller"
	I0510 17:58:11.189198       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:58:11.284157       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:58:11.289777       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 17:58:11.289965       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:58:11.294077       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2c82f924ca0f1671f3d7fc7bb20eb550e3f9947834f82c3af76c731e5da0367a] <==
	I0510 17:57:04.671506       1 serving.go:386] Generated self-signed cert in-memory
	I0510 17:57:06.395354       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:57:06.395446       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:57:06.399932       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0510 17:57:06.400202       1 shared_informer.go:350] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0510 17:57:06.400413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:57:06.400449       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:57:06.400550       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:57:06.400585       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:57:06.400912       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:57:06.401186       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:57:06.500481       1 shared_informer.go:357] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0510 17:57:06.500698       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:57:06.500850       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E0510 17:57:17.930452       1 reflector.go:200] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:57:17.989860       1 reflector.go:200] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:57:17.992115       1 reflector.go:200] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0510 17:57:58.684806       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0510 17:57:58.684880       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0510 17:57:58.684992       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b989fd8fae3bc05f3b514c7d418b3c6bcae98472c1c94d9d0517ea04b19f21a0] <==
	E0510 17:58:04.093433       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.96:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 17:58:04.274503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.96:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:58:04.375831       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.96:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:58:04.404870       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:58:04.438325       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 17:58:04.457488       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.96:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:58:04.560877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.39.96:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:58:04.726493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:58:04.820850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 17:58:04.969521       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.96:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 17:58:04.978783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.96:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 17:58:05.095327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.96:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:58:05.151575       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.96:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:58:09.532438       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 17:58:09.532808       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:58:09.532977       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:58:09.533244       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:58:09.533305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:58:09.533354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:58:09.534120       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:58:09.534399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0510 17:58:09.534435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:58:09.532538       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 17:58:09.544324       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0510 17:58:10.211829       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
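
The burst of "forbidden" watch errors at 17:58:09 most plausibly occurs while the freshly restarted kube-apiserver is still loading its RBAC rules, so requests from system:kube-scheduler are briefly denied; the reflectors retry and recover, which is why the log ends with a normal "Caches are synced" line. The denied permission can be checked with impersonation (a sketch; requires a context with impersonation rights, which the default minikube admin context has):

  kubectl --context functional-691821 auth can-i list pods --all-namespaces --as system:kube-scheduler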
	
	
	==> kubelet <==
	May 10 18:02:36 functional-691821 kubelet[5093]: E0510 18:02:36.546667    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:02:39 functional-691821 kubelet[5093]: E0510 18:02:39.545839    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:02:42 functional-691821 kubelet[5093]: E0510 18:02:42.546862    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:02:45 functional-691821 kubelet[5093]: E0510 18:02:45.546131    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:02:48 functional-691821 kubelet[5093]: E0510 18:02:48.546056    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:02:52 functional-691821 kubelet[5093]: E0510 18:02:52.546919    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:02:54 functional-691821 kubelet[5093]: E0510 18:02:54.547446    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:02:57 functional-691821 kubelet[5093]: E0510 18:02:57.545843    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:03:03 functional-691821 kubelet[5093]: E0510 18:03:03.545948    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:03:06 functional-691821 kubelet[5093]: E0510 18:03:06.546483    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:03:07 functional-691821 kubelet[5093]: E0510 18:03:07.546555    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:03:11 functional-691821 kubelet[5093]: E0510 18:03:11.546652    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:03:18 functional-691821 kubelet[5093]: E0510 18:03:18.545994    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:03:19 functional-691821 kubelet[5093]: E0510 18:03:19.546962    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:03:20 functional-691821 kubelet[5093]: E0510 18:03:20.546330    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:03:22 functional-691821 kubelet[5093]: E0510 18:03:22.548703    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:03:30 functional-691821 kubelet[5093]: E0510 18:03:30.545481    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:03:30 functional-691821 kubelet[5093]: E0510 18:03:30.547566    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:03:31 functional-691821 kubelet[5093]: E0510 18:03:31.545823    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:03:37 functional-691821 kubelet[5093]: E0510 18:03:37.546853    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:03:41 functional-691821 kubelet[5093]: E0510 18:03:41.545407    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:03:41 functional-691821 kubelet[5093]: E0510 18:03:41.546230    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:03:44 functional-691821 kubelet[5093]: E0510 18:03:44.547566    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:03:51 functional-691821 kubelet[5093]: E0510 18:03:51.546074    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:03:52 functional-691821 kubelet[5093]: E0510 18:03:52.546252    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
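
Every kubelet error above is one underlying failure: unauthenticated pulls from registry-1.docker.io are rejected with HTTP 429 (Docker Hub's pull rate limit), so the mysql, nginx, and dashboard containers never start, and the DashboardCmd, PersistentVolumeClaim, and MySQL tests all time out waiting on them. One possible mitigation (a sketch, not part of the test flow) is to pull the affected images on the host, where a warm cache or registry credentials may exist, and side-load them into the profile:

  docker pull docker.io/mysql:5.7
  minikube -p functional-691821 image load docker.io/mysql:5.7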
	
	
	==> storage-provisioner [8d326984f158b0b8759ee8a0dc7c4228ac980139324c7521604b634092b67fb2] <==
	I0510 17:58:10.974678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0510 17:58:10.978170       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e2d282b25a3fc112e61c9297ab3312334703edf6d6cc15e2744eb97494496030] <==
	W0510 18:03:30.467485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:32.470465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:32.474727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:34.477644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:34.485969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:36.489267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:36.493856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:38.497064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:38.502047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:40.504583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:40.510043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:42.513109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:42.517727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:44.520862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:44.528896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:46.531679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:46.541054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:48.544384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:48.552654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:50.555336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:50.560200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:52.564091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:52.572885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:54.575952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:03:54.594675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
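
These warnings repeat on a roughly two-second cadence, which matches a leader-election renew loop; the storage provisioner evidently still takes its lock on a v1 Endpoints object, an API that Kubernetes 1.33 flags as deprecated on every request. The replacement resource can be listed directly (a sketch):

  kubectl --context functional-691821 -n kube-system get endpointslices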
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-691821 -n functional-691821
helpers_test.go:261: (dbg) Run:  kubectl --context functional-691821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-691821 describe pod busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-691821 describe pod busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7: exit status 1 (82.047727ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-691821/192.168.39.96
	Start Time:       Sat, 10 May 2025 17:58:51 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://e817147458eb6d601ada96e9a5974fccfc28da791994115c324e3151ff0c4bae
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 May 2025 17:58:53 +0000
	      Finished:     Sat, 10 May 2025 17:58:53 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x55zh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-x55zh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m4s  default-scheduler  Successfully assigned default/busybox-mount to functional-691821
	  Normal  Pulling    5m4s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m2s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.245s (2.245s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m2s  kubelet            Created container: mount-munger
	  Normal  Started    5m2s  kubelet            Started container mount-munger
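
busybox-mount appears in the "non-running pods" list above only because the post-mortem query selects status.phase!=Running, and a Succeeded pod matches that selector; its Completed, exit-code-0 state shows the mount test container itself finished normally. The same selection can be reproduced directly:

  kubectl --context functional-691821 get po -A --field-selector=status.phase!=Running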
	
	
	Name:             mysql-58ccfd96bb-zxjxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-691821/192.168.39.96
	Start Time:       Sat, 10 May 2025 17:58:35 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2wxvj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2wxvj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m20s                  default-scheduler  Successfully assigned default/mysql-58ccfd96bb-zxjxx to functional-691821
	  Warning  Failed     3m45s (x2 over 5m16s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m24s (x5 over 5m19s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m22s (x5 over 5m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m22s (x3 over 4m59s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x20 over 5m16s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4s (x20 over 5m16s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-691821/192.168.39.96
	Start Time:       Sat, 10 May 2025 17:58:44 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hbdmw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-hbdmw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m11s                  default-scheduler  Successfully assigned default/sp-pod to functional-691821
	  Normal   Pulling    2m11s (x5 over 5m11s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m9s (x5 over 5m8s)    kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m9s (x5 over 5m8s)    kubelet            Error: ErrImagePull
	  Warning  Failed     67s (x15 over 5m8s)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    14s (x19 over 5m8s)    kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-qgdvz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-bm8x7" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-691821 describe pod busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7: exit status 1
E0510 18:08:07.055987 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.20s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (189.84s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [105e97a8-42a9-4f95-81da-92e561ff98f9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00637727s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-691821 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-691821 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-691821 get pvc myclaim -o=json
I0510 17:58:42.261555 1172304 retry.go:31] will retry after 1.883533412s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:a4ee0c5c-cd90-4f37-94a9-31ae66f48f20 ResourceVersion:774 Generation:0 CreationTimestamp:2025-05-10 17:58:42 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c0a850 VolumeMode:0xc001c0a860 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
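
The retry message embeds the claim's own last-applied-configuration annotation; rendered as YAML purely for readability (this should match what testdata/storage-provisioner/pvc.yaml applies), the request was:

  kubectl --context functional-691821 apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myclaim
    namespace: default
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
    volumeMode: Filesystem
  EOF

The second get pvc poll below evidently found the claim Bound, since the test proceeds straight to pod.yaml; the eventual failure is the nginx image pull, not provisioning.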
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-691821 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-691821 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [48ed2b59-4537-4681-861b-f3fd1f291679] Pending
helpers_test.go:344: "sp-pod" [48ed2b59-4537-4681-861b-f3fd1f291679] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-691821 -n functional-691821
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-05-10 18:01:44.60052458 +0000 UTC m=+1372.244294628
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-691821 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-691821 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-691821/192.168.39.96
Start Time:       Sat, 10 May 2025 17:58:44 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hbdmw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-hbdmw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/sp-pod to functional-691821
  Warning  Failed     89s (x4 over 2m57s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     89s (x4 over 2m57s)   kubelet            Error: ErrImagePull
  Normal   BackOff    12s (x10 over 2m57s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     12s (x10 over 2m57s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    0s (x5 over 3m)       kubelet            Pulling image "docker.io/nginx"
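
The events above show anonymous pulls from Docker Hub being rejected with 429 Too Many Requests, which is what keeps sp-pod in ImagePullBackOff. A minimal mitigation sketch, assuming Docker Hub credentials are available (dockerhub-creds and the DOCKERHUB_* variables are hypothetical placeholders, not values from this run):

    # Create a pull secret and attach it to the default service account,
    # so pods such as sp-pod pull as an authenticated user.
    kubectl --context functional-691821 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKERHUB_USER" \
      --docker-password="$DOCKERHUB_TOKEN"
    kubectl --context functional-691821 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'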
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-691821 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-691821 logs sp-pod -n default: exit status 1 (72.741261ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-691821 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
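
The BadRequest from kubectl logs is expected here: the myfrontend container never started, so there is no log stream to return. The waiting reason can instead be read from the pod status; a sketch against the same context (the jsonpath expression is illustrative, not taken from this run):

    kubectl --context functional-691821 get pod sp-pod -n default \
      -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'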
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-691821 -n functional-691821
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 logs -n 25: (1.402619584s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-691821 ssh stat                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh sudo                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port4113061102/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh -- ls                                              | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh sudo                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| update-context | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh pgrep                                              | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-691821 image build -t                                         | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | localhost/my-image:functional-691821                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-691821 image ls                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:58:53
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:58:53.248483 1182290 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:58:53.248630 1182290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:53.248640 1182290 out.go:358] Setting ErrFile to fd 2...
	I0510 17:58:53.248647 1182290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:53.248827 1182290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 17:58:53.249383 1182290 out.go:352] Setting JSON to false
	I0510 17:58:53.250412 1182290 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":20477,"bootTime":1746879456,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:58:53.250511 1182290 start.go:140] virtualization: kvm guest
	I0510 17:58:53.252374 1182290 out.go:177] * [functional-691821] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:58:53.253636 1182290 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:58:53.253653 1182290 notify.go:220] Checking for updates...
	I0510 17:58:53.256186 1182290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:58:53.257433 1182290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:58:53.258882 1182290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:58:53.259972 1182290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:58:53.261113 1182290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:58:53.262700 1182290 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:58:53.263147 1182290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:53.263200 1182290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:53.279049 1182290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0510 17:58:53.279474 1182290 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:53.279900 1182290 main.go:141] libmachine: Using API Version  1
	I0510 17:58:53.279945 1182290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:53.280307 1182290 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:53.280513 1182290 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:53.280799 1182290 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:58:53.281084 1182290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:53.281120 1182290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:53.296869 1182290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0510 17:58:53.297413 1182290 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:53.297937 1182290 main.go:141] libmachine: Using API Version  1
	I0510 17:58:53.297959 1182290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:53.298294 1182290 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:53.298467 1182290 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:53.331941 1182290 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 17:58:53.333118 1182290 start.go:304] selected driver: kvm2
	I0510 17:58:53.333135 1182290 start.go:908] validating driver "kvm2" against &{Name:functional-691821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-691821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:58:53.333262 1182290 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:58:53.334422 1182290 cni.go:84] Creating CNI manager for ""
	I0510 17:58:53.334481 1182290 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:58:53.334534 1182290 start.go:347] cluster config:
	{Name:functional-691821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-691821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:58:53.335966 1182290 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e817147458eb6       56cc512116c8f       2 minutes ago       Exited              mount-munger              0                   c60694e2ae4a9       busybox-mount
	cf9b4f2bda584       82e4c8a736a4f       3 minutes ago       Running             echoserver                0                   df87dfa2642c3       hello-node-fcfd88b6f-dmfws
	3cf8f0a97cccb       82e4c8a736a4f       3 minutes ago       Running             echoserver                0                   07de351a73ba2       hello-node-connect-58f9cf68d8-s9rfl
	e2d282b25a3fc       6e38f40d628db       3 minutes ago       Running             storage-provisioner       5                   16968ce74e551       storage-provisioner
	8d326984f158b       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       4                   16968ce74e551       storage-provisioner
	64c36ba7cc484       f1184a0bd7fe5       3 minutes ago       Running             kube-proxy                2                   b184287cd85b5       kube-proxy-64t85
	a07d63afac120       1cf5f116067c6       3 minutes ago       Running             coredns                   2                   18e2a2dc46b0b       coredns-674b8bbfcf-9frgq
	2216f7cab388d       6ba9545b2183e       3 minutes ago       Running             kube-apiserver            0                   1cf595b2025dc       kube-apiserver-functional-691821
	050e79a4fc59d       1d579cb6d6967       3 minutes ago       Running             kube-controller-manager   3                   84647453de7f2       kube-controller-manager-functional-691821
	b989fd8fae3bc       8d72586a76469       3 minutes ago       Running             kube-scheduler            2                   ff668932218ea       kube-scheduler-functional-691821
	75c55deac05d6       499038711c081       3 minutes ago       Running             etcd                      2                   b4a45e539be46       etcd-functional-691821
	ae0cf2a6aa1d6       1d579cb6d6967       4 minutes ago       Exited              kube-controller-manager   2                   84647453de7f2       kube-controller-manager-functional-691821
	2c82f924ca0f1       8d72586a76469       4 minutes ago       Exited              kube-scheduler            1                   ff668932218ea       kube-scheduler-functional-691821
	856acec20060e       499038711c081       4 minutes ago       Exited              etcd                      1                   b4a45e539be46       etcd-functional-691821
	c37c65e582838       1cf5f116067c6       4 minutes ago       Exited              coredns                   1                   18e2a2dc46b0b       coredns-674b8bbfcf-9frgq
	21669a32b7e33       f1184a0bd7fe5       4 minutes ago       Exited              kube-proxy                1                   b184287cd85b5       kube-proxy-64t85
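
A similar runtime-level view can be taken directly on the node; a sketch, assuming crictl inside the minikube VM is already configured for the containerd socket (the default in these images):

    out/minikube-linux-amd64 -p functional-691821 ssh -- sudo crictl ps -a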
	
	
	==> containerd <==
	May 10 18:00:10 functional-691821 containerd[4313]: time="2025-05-10T18:00:10.684169413Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:00:10 functional-691821 containerd[4313]: time="2025-05-10T18:00:10.684234269Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=11973"
	May 10 18:00:12 functional-691821 containerd[4313]: time="2025-05-10T18:00:12.546173649Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	May 10 18:00:12 functional-691821 containerd[4313]: time="2025-05-10T18:00:12.549258813Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:00:13 functional-691821 containerd[4313]: time="2025-05-10T18:00:13.841798000Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:00:15 functional-691821 containerd[4313]: time="2025-05-10T18:00:15.494114213Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:00:15 functional-691821 containerd[4313]: time="2025-05-10T18:00:15.494163080Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	May 10 18:00:26 functional-691821 containerd[4313]: time="2025-05-10T18:00:26.547991306Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:00:26 functional-691821 containerd[4313]: time="2025-05-10T18:00:26.550523048Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:00:27 functional-691821 containerd[4313]: time="2025-05-10T18:00:27.152926558Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:00:28 functional-691821 containerd[4313]: time="2025-05-10T18:00:28.809718258Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:00:28 functional-691821 containerd[4313]: time="2025-05-10T18:00:28.809782564Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	May 10 18:00:30 functional-691821 containerd[4313]: time="2025-05-10T18:00:30.546899122Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	May 10 18:00:30 functional-691821 containerd[4313]: time="2025-05-10T18:00:30.550082076Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:00:31 functional-691821 containerd[4313]: time="2025-05-10T18:00:31.126187566Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:00:32 functional-691821 containerd[4313]: time="2025-05-10T18:00:32.785822707Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:00:32 functional-691821 containerd[4313]: time="2025-05-10T18:00:32.786367018Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	May 10 18:01:31 functional-691821 containerd[4313]: time="2025-05-10T18:01:31.546615408Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	May 10 18:01:31 functional-691821 containerd[4313]: time="2025-05-10T18:01:31.549497536Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:01:32 functional-691821 containerd[4313]: time="2025-05-10T18:01:32.151208487Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:01:33 functional-691821 containerd[4313]: time="2025-05-10T18:01:33.808755574Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	May 10 18:01:33 functional-691821 containerd[4313]: time="2025-05-10T18:01:33.808859759Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:01:44 functional-691821 containerd[4313]: time="2025-05-10T18:01:44.548424304Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	May 10 18:01:44 functional-691821 containerd[4313]: time="2025-05-10T18:01:44.551415753Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:01:45 functional-691821 containerd[4313]: time="2025-05-10T18:01:45.154892540Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
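
Interleaved with the 429 responses, containerd repeatedly logs "failed to decode hosts.toml" error="invalid `host` tree", which points at a malformed registry hosts file under /etc/containerd/certs.d/. For comparison, a minimal well-formed docker.io entry (a sketch of the documented hosts.toml format; the mirror URL is a hypothetical placeholder, not the file from this VM):

    # /etc/containerd/certs.d/docker.io/hosts.toml
    server = "https://registry-1.docker.io"

    [host."https://mirror.example.com"]
      capabilities = ["pull", "resolve"]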
	
	
	==> coredns [a07d63afac1208e5d7b51664838271e7c178dc2599c84b9861fe7d38718ec2f0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:56700 - 25698 "HINFO IN 1577412054363027221.8669343729915223077. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033419639s
	
	
	==> coredns [c37c65e582838a84367d5743a692b4338f105733d6e1734cf7d38f0aebfeb391] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:53064 - 49758 "HINFO IN 1882376968732920025.5461233709581115618. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025064626s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-691821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-691821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=functional-691821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_56_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:55:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-691821
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:01:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:59:11 +0000   Sat, 10 May 2025 17:55:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:59:11 +0000   Sat, 10 May 2025 17:55:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:59:11 +0000   Sat, 10 May 2025 17:55:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:59:11 +0000   Sat, 10 May 2025 17:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    functional-691821
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912740Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6b92320f3b244d19d8de03c98e3fa63
	  System UUID:                a6b92320-f3b2-44d1-9d8d-e03c98e3fa63
	  Boot ID:                    66acc6ce-de22-402d-970e-cba38e9f4da1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-s9rfl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     hello-node-fcfd88b6f-dmfws                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     mysql-58ccfd96bb-zxjxx                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    3m10s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 coredns-674b8bbfcf-9frgq                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m39s
	  kube-system                 etcd-functional-691821                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m44s
	  kube-system                 kube-apiserver-functional-691821              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-functional-691821     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-proxy-64t85                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-functional-691821              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-qgdvz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-bm8x7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  Starting                 3m34s                  kube-proxy       
	  Normal  Starting                 4m39s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m51s (x8 over 5m51s)  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s (x8 over 5m51s)  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s (x7 over 5m51s)  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m45s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m44s                  kubelet          Node functional-691821 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  5m44s                  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m44s                  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s                  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m41s                  node-controller  Node functional-691821 event: Registered Node functional-691821 in Controller
	  Normal  NodeHasNoDiskPressure    4m30s (x8 over 4m30s)  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m30s (x8 over 4m30s)  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m30s (x7 over 4m30s)  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m24s                  node-controller  Node functional-691821 event: Registered Node functional-691821 in Controller
	  Normal  Starting                 3m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s (x8 over 3m39s)  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s (x8 over 3m39s)  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s (x7 over 3m39s)  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node functional-691821 event: Registered Node functional-691821 in Controller
	
	
	==> dmesg <==
	[  +0.004996] (rpcbind)[142]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.123380] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085397] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.117100] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.094887] kauditd_printk_skb: 46 callbacks suppressed
	[May10 17:56] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.988309] kauditd_printk_skb: 19 callbacks suppressed
	[ +30.492392] kauditd_printk_skb: 77 callbacks suppressed
	[  +0.840875] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.078180] kauditd_printk_skb: 13 callbacks suppressed
	[May10 17:57] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.258728] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.802612] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.127631] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.942984] kauditd_printk_skb: 81 callbacks suppressed
	[May10 17:58] kauditd_printk_skb: 10 callbacks suppressed
	[  +4.185734] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.522787] kauditd_printk_skb: 11 callbacks suppressed
	[  +1.211923] kauditd_printk_skb: 33 callbacks suppressed
	[  +1.856047] kauditd_printk_skb: 19 callbacks suppressed
	[  +4.809920] kauditd_printk_skb: 15 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 25 callbacks suppressed
	[  +2.936875] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [75c55deac05d69f20fa045dc7299aba1671d3f146746315886d4df3296540b21] <==
	{"level":"info","ts":"2025-05-10T17:57:59.812159Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T17:57:59.812578Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"d4b4d4eeb3ae7df8","initial-advertise-peer-urls":["https://192.168.39.96:2380"],"listen-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.96:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T17:57:59.812341Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:59.813505Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:59.813384Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T17:58:01.584903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 is starting a new election at term 3"}
	{"level":"info","ts":"2025-05-10T17:58:01.584970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:58:01.585064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgPreVoteResp from d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2025-05-10T17:58:01.585095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became candidate at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.585153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgVoteResp from d4b4d4eeb3ae7df8 at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.585200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became leader at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.585210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4b4d4eeb3ae7df8 elected leader d4b4d4eeb3ae7df8 at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.586844Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"d4b4d4eeb3ae7df8","local-member-attributes":"{Name:functional-691821 ClientURLs:[https://192.168.39.96:2379]}","request-path":"/0/members/d4b4d4eeb3ae7df8/attributes","cluster-id":"f38f0aa72455c2b8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:58:01.586892Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:58:01.587090Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:58:01.587920Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:58:01.588064Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:58:01.588935Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:58:01.588969Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:58:01.589441Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:58:01.590850Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.96:2379"}
	{"level":"warn","ts":"2025-05-10T17:58:54.157352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.93852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:58:54.157432Z","caller":"traceutil/trace.go:171","msg":"trace[919417013] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:845; }","duration":"189.134784ms","start":"2025-05-10T17:58:53.968285Z","end":"2025-05-10T17:58:54.157420Z","steps":["trace[919417013] 'range keys from in-memory index tree'  (duration: 188.861938ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:58:54.157604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.022159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:58:54.157622Z","caller":"traceutil/trace.go:171","msg":"trace[935324799] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:845; }","duration":"148.082091ms","start":"2025-05-10T17:58:54.009534Z","end":"2025-05-10T17:58:54.157616Z","steps":["trace[935324799] 'range keys from in-memory index tree'  (duration: 147.943061ms)"],"step_count":1}
	
	
	==> etcd [856acec20060e53c35110df29d6d2e6f031d2518c6fe6d8566922535e7deaeef] <==
	{"level":"info","ts":"2025-05-10T17:57:05.824420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:57:05.824579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgPreVoteResp from d4b4d4eeb3ae7df8 at term 2"}
	{"level":"info","ts":"2025-05-10T17:57:05.824708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.824830Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgVoteResp from d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.824991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.825157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4b4d4eeb3ae7df8 elected leader d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.832239Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"d4b4d4eeb3ae7df8","local-member-attributes":"{Name:functional-691821 ClientURLs:[https://192.168.39.96:2379]}","request-path":"/0/members/d4b4d4eeb3ae7df8/attributes","cluster-id":"f38f0aa72455c2b8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:57:05.832292Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:57:05.832739Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:57:05.832785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:57:05.832329Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:57:05.833673Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:57:05.833841Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:57:05.834554Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:57:05.834573Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.96:2379"}
	{"level":"info","ts":"2025-05-10T17:57:58.744115Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T17:57:58.744232Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-691821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	{"level":"info","ts":"2025-05-10T17:57:58.745876Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4b4d4eeb3ae7df8","current-leader-member-id":"d4b4d4eeb3ae7df8"}
	{"level":"warn","ts":"2025-05-10T17:57:58.745956Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:57:58.745986Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:57:58.746029Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:57:58.746069Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T17:57:58.749474Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:58.749584Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:58.749593Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-691821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	
	
	==> kernel <==
	 18:01:45 up 6 min,  0 user,  load average: 0.42, 0.49, 0.26
	Linux functional-691821 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [2216f7cab388d023cba0dd87098e5d38a47d25ab30366eae1ececa307d730989] <==
	I0510 17:58:09.640700       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0510 17:58:10.414079       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 17:58:10.567267       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0510 17:58:10.834368       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.96]
	I0510 17:58:10.835564       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 17:58:10.843158       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 17:58:11.246747       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 17:58:11.283333       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 17:58:11.312191       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 17:58:11.317941       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 17:58:12.834577       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:13.079769       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 17:58:30.131589       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:30.144804       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.188.157"}
	I0510 17:58:33.594676       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:35.152296       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.132.59"}
	I0510 17:58:35.157933       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:38.312353       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:38.322991       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.235.170"}
	I0510 17:58:43.513813       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:43.523104       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.188.137"}
	I0510 17:58:54.528699       1 controller.go:667] quota admission added evaluator for: namespaces
	I0510 17:58:54.828881       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.148.186"}
	I0510 17:58:54.832290       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:54.864279       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.126.207"}
	
	
	==> kube-controller-manager [050e79a4fc59dac114131acb37aa8a9d9a80425a2779cd3da14fb462c0946fe2] <==
	I0510 17:58:13.106087       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:58:13.106227       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0510 17:58:13.106357       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 17:58:13.106374       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 17:58:13.111960       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:58:13.115883       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 17:58:13.125966       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:58:13.138270       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 17:58:13.155184       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 17:58:13.155214       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 17:58:13.155636       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 17:58:13.155809       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-691821"
	I0510 17:58:13.155895       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 17:58:13.156092       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 17:58:13.541983       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:58:13.553356       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:58:13.553577       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:58:13.553822       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0510 17:58:54.626700       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.651543       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.651688       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.673886       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.674274       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.683624       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.685729       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [ae0cf2a6aa1d620418f5d84d42322a622df1f38d99ad5e0e4630991991a563d7] <==
	I0510 17:57:21.220606       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 17:57:21.220836       1 shared_informer.go:357] "Caches are synced" controller="expand"
	I0510 17:57:21.224274       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 17:57:21.226669       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 17:57:21.226976       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 17:57:21.228359       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 17:57:21.228611       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 17:57:21.229897       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 17:57:21.231063       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 17:57:21.232236       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 17:57:21.273439       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 17:57:21.321261       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0510 17:57:21.425200       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 17:57:21.425661       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 17:57:21.426717       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-691821"
	I0510 17:57:21.426926       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 17:57:21.453313       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 17:57:21.528653       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:57:21.532160       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:57:21.568787       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:57:21.570183       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:57:21.926438       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:57:21.926468       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:57:21.926474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 17:57:21.953665       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [21669a32b7e33862427a098afcd9bfcdebe112b21a94c5be46fc56fb52098d7f] <==
	E0510 17:56:57.550398       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 17:57:05.957863       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.96"]
	E0510 17:57:05.958240       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:57:06.026514       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 17:57:06.026753       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 17:57:06.026899       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:57:06.053117       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:57:06.056696       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:57:06.056725       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:57:06.064233       1 config.go:199] "Starting service config controller"
	I0510 17:57:06.064270       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:57:06.064300       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:57:06.064304       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:57:06.064332       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:57:06.064350       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:57:06.065114       1 config.go:329] "Starting node config controller"
	I0510 17:57:06.065135       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:57:06.165180       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:57:06.165469       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:57:06.165991       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:57:06.166076       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [64c36ba7cc4847ab2180b08ff83121ec785778882563a18aa707bbb1b259e14f] <==
	E0510 17:58:11.107097       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 17:58:11.119418       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.96"]
	E0510 17:58:11.119678       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:58:11.171387       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 17:58:11.171414       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 17:58:11.171434       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:58:11.182345       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:58:11.182537       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:58:11.182549       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:58:11.183639       1 config.go:199] "Starting service config controller"
	I0510 17:58:11.183654       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:58:11.188921       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:58:11.188930       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:58:11.188944       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:58:11.188947       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:58:11.189192       1 config.go:329] "Starting node config controller"
	I0510 17:58:11.189198       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:58:11.284157       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:58:11.289777       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 17:58:11.289965       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:58:11.294077       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2c82f924ca0f1671f3d7fc7bb20eb550e3f9947834f82c3af76c731e5da0367a] <==
	I0510 17:57:04.671506       1 serving.go:386] Generated self-signed cert in-memory
	I0510 17:57:06.395354       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:57:06.395446       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:57:06.399932       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0510 17:57:06.400202       1 shared_informer.go:350] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0510 17:57:06.400413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:57:06.400449       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:57:06.400550       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:57:06.400585       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:57:06.400912       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:57:06.401186       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:57:06.500481       1 shared_informer.go:357] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0510 17:57:06.500698       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:57:06.500850       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E0510 17:57:17.930452       1 reflector.go:200] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:57:17.989860       1 reflector.go:200] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:57:17.992115       1 reflector.go:200] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0510 17:57:58.684806       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0510 17:57:58.684880       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0510 17:57:58.684992       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b989fd8fae3bc05f3b514c7d418b3c6bcae98472c1c94d9d0517ea04b19f21a0] <==
	E0510 17:58:04.093433       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.96:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 17:58:04.274503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.96:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:58:04.375831       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.96:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:58:04.404870       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:58:04.438325       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 17:58:04.457488       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.96:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:58:04.560877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.39.96:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:58:04.726493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:58:04.820850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 17:58:04.969521       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.96:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 17:58:04.978783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.96:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 17:58:05.095327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.96:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:58:05.151575       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.96:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:58:09.532438       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 17:58:09.532808       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:58:09.532977       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:58:09.533244       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:58:09.533305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:58:09.533354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:58:09.534120       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:58:09.534399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0510 17:58:09.534435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:58:09.532538       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 17:58:09.544324       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0510 17:58:10.211829       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 18:00:39 functional-691821 kubelet[5093]: E0510 18:00:39.546720    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:00:39 functional-691821 kubelet[5093]: E0510 18:00:39.547305    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:00:40 functional-691821 kubelet[5093]: E0510 18:00:40.546437    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:00:46 functional-691821 kubelet[5093]: E0510 18:00:46.547536    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:00:50 functional-691821 kubelet[5093]: E0510 18:00:50.547345    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:00:50 functional-691821 kubelet[5093]: E0510 18:00:50.547681    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:00:53 functional-691821 kubelet[5093]: E0510 18:00:53.545943    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:00:58 functional-691821 kubelet[5093]: E0510 18:00:58.546898    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:01:02 functional-691821 kubelet[5093]: E0510 18:01:02.546310    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:01:05 functional-691821 kubelet[5093]: E0510 18:01:05.546390    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:01:08 functional-691821 kubelet[5093]: E0510 18:01:08.548326    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:01:12 functional-691821 kubelet[5093]: E0510 18:01:12.546583    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:01:13 functional-691821 kubelet[5093]: E0510 18:01:13.546406    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:01:16 functional-691821 kubelet[5093]: E0510 18:01:16.545918    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:01:21 functional-691821 kubelet[5093]: E0510 18:01:21.545870    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:01:25 functional-691821 kubelet[5093]: E0510 18:01:25.546918    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:01:28 functional-691821 kubelet[5093]: E0510 18:01:28.546950    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:01:32 functional-691821 kubelet[5093]: E0510 18:01:32.545565    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:01:33 functional-691821 kubelet[5093]: E0510 18:01:33.809563    5093 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	May 10 18:01:33 functional-691821 kubelet[5093]: E0510 18:01:33.809849    5093 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	May 10 18:01:33 functional-691821 kubelet[5093]: E0510 18:01:33.810100    5093 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2wxvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-zxjxx_default(d200ca8f-e5a9-4b51-be98-3a6c36a30ba2): ErrImagePull: failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	May 10 18:01:33 functional-691821 kubelet[5093]: E0510 18:01:33.811628    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:01:36 functional-691821 kubelet[5093]: E0510 18:01:36.547453    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:01:43 functional-691821 kubelet[5093]: E0510 18:01:43.546417    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:01:45 functional-691821 kubelet[5093]: E0510 18:01:45.546924    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	
	
	==> storage-provisioner [8d326984f158b0b8759ee8a0dc7c4228ac980139324c7521604b634092b67fb2] <==
	I0510 17:58:10.974678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0510 17:58:10.978170       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e2d282b25a3fc112e61c9297ab3312334703edf6d6cc15e2744eb97494496030] <==
	W0510 18:01:21.832176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:23.835245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:23.840744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:25.844477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:25.852758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:27.856737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:27.861782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:29.864835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:29.869925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:31.873546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:31.882110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:33.885671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:33.892895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:35.895899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:35.905751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:37.910778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:37.922385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:39.924905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:39.929619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:41.933627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:41.939374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:43.942684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:43.951087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:45.954609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:45.962920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
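The failing pulls in the logs above all share one signature: registry-1.docker.io answers anonymous manifest requests with "429 Too Many Requests" (mysql:5.7, nginx, kubernetesui/dashboard, kubernetesui/metrics-scraper), so the affected pods stay in ImagePullBackOff for the whole test window. A minimal workaround sketch, assuming the host has Docker with authenticated Docker Hub credentials (authenticated pulls get a higher rate quota than anonymous ones); since the mysql container runs with ImagePullPolicy:IfNotPresent, a side-loaded image would be used without contacting the registry:

	# pull on the host using an authenticated session, then side-load the image
	# into the cluster so the kubelet finds it locally instead of pulling
	docker login
	docker pull docker.io/mysql:5.7
	minikube -p functional-691821 image load docker.io/mysql:5.7

Starting the cluster with "minikube start --registry-mirror=<mirror-url>" is another common mitigation; both are sketches of mitigations, not fixes verified by this run.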
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-691821 -n functional-691821
helpers_test.go:261: (dbg) Run:  kubectl --context functional-691821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-691821 describe pod busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-691821 describe pod busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7: exit status 1 (77.184866ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-691821/192.168.39.96
	Start Time:       Sat, 10 May 2025 17:58:51 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://e817147458eb6d601ada96e9a5974fccfc28da791994115c324e3151ff0c4bae
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 May 2025 17:58:53 +0000
	      Finished:     Sat, 10 May 2025 17:58:53 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x55zh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-x55zh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m55s  default-scheduler  Successfully assigned default/busybox-mount to functional-691821
	  Normal  Pulling    2m55s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m53s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.245s (2.245s including waiting). Image size: 2395207 bytes.
	  Normal  Created    2m53s  kubelet            Created container: mount-munger
	  Normal  Started    2m53s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-zxjxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-691821/192.168.39.96
	Start Time:       Sat, 10 May 2025 17:58:35 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2wxvj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2wxvj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m11s                default-scheduler  Successfully assigned default/mysql-58ccfd96bb-zxjxx to functional-691821
	  Warning  Failed     96s (x2 over 3m7s)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    15s (x5 over 3m10s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     13s (x5 over 3m7s)   kubelet            Error: ErrImagePull
	  Warning  Failed     13s (x3 over 2m50s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    1s (x10 over 3m7s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     1s (x10 over 3m7s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-691821/192.168.39.96
	Start Time:       Sat, 10 May 2025 17:58:44 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hbdmw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-hbdmw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-691821
	  Warning  Failed     91s (x4 over 2m59s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     91s (x4 over 2m59s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    14s (x10 over 2m59s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     14s (x10 over 2m59s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2s (x5 over 3m2s)     kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-qgdvz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-bm8x7" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-691821 describe pod busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7: exit status 1
E0510 18:03:07.055506 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:03:34.768346 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.84s)
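The ImagePullBackOff states above (mysql-58ccfd96bb-zxjxx and sp-pod) share a single root cause: unauthenticated pulls from registry-1.docker.io hitting Docker Hub's 429 Too Many Requests rate limit. A minimal mitigation sketch, assuming the CI host can either authenticate to Docker Hub or reach a pull-through cache (DOCKERHUB_USER/DOCKERHUB_TOKEN are hypothetical credentials and mirror.gcr.io is only an example mirror; neither is part of this job):

	# Authenticated pulls count against a much higher Docker Hub quota
	echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USER" --password-stdin

	# Or create the profile behind a pull-through mirror so test pods
	# never contact registry-1.docker.io directly
	out/minikube-linux-amd64 start -p functional-691821 --registry-mirror=https://mirror.gcr.io

Whether host-side registry credentials propagate into the VM's containerd depends on how the profile is provisioned, so the mirror route is the more dependable of the two for this runtime.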
TestFunctional/parallel/MySQL (602.67s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-691821 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-zxjxx" [d200ca8f-e5a9-4b51-be98-3a6c36a30ba2] Pending
helpers_test.go:344: "mysql-58ccfd96bb-zxjxx" [d200ca8f-e5a9-4b51-be98-3a6c36a30ba2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-691821 -n functional-691821
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-05-10 18:08:35.489863131 +0000 UTC m=+1783.133633174
functional_test.go:1816: (dbg) Run:  kubectl --context functional-691821 describe po mysql-58ccfd96bb-zxjxx -n default
functional_test.go:1816: (dbg) kubectl --context functional-691821 describe po mysql-58ccfd96bb-zxjxx -n default:
Name:             mysql-58ccfd96bb-zxjxx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-691821/192.168.39.96
Start Time:       Sat, 10 May 2025 17:58:35 +0000
Labels:           app=mysql
                  pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2wxvj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-2wxvj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-58ccfd96bb-zxjxx to functional-691821
  Warning  Failed     8m25s (x2 over 9m56s)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    7m4s (x5 over 9m59s)    kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     7m2s (x5 over 9m56s)    kubelet            Error: ErrImagePull
  Warning  Failed     7m2s (x3 over 9m39s)    kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m44s (x20 over 9m56s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m30s (x21 over 9m56s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1816: (dbg) Run:  kubectl --context functional-691821 logs mysql-58ccfd96bb-zxjxx -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-691821 logs mysql-58ccfd96bb-zxjxx -n default: exit status 1 (71.469195ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-zxjxx" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1816: kubectl --context functional-691821 logs mysql-58ccfd96bb-zxjxx -n default: exit status 1
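kubectl logs returns BadRequest here because the mysql container never started, so there is no log stream to fetch. In this state the waiting reason can be read straight from pod status instead; a small diagnostic sketch against the same context and pod:

	kubectl --context functional-691821 get pod mysql-58ccfd96bb-zxjxx -n default \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
	# prints: ImagePullBackOff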
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
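A sketch of how this test could be insulated from Docker Hub rate limiting, assuming the host has a Docker daemon with network access: pull the image once on the host (where it stays cached across runs) and side-load it into the profile, so the pod's pull never reaches the registry:

	docker pull docker.io/mysql:5.7
	out/minikube-linux-amd64 -p functional-691821 image load docker.io/mysql:5.7

This only helps when the pod's imagePullPolicy resolves to IfNotPresent (the default for a non-latest tag such as 5.7); an explicit Always would force a registry pull regardless.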
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-691821 -n functional-691821
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 logs -n 25: (1.379216877s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-691821 ssh stat                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh sudo                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port4113061102/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh -- ls                                              | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh sudo                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:58 UTC | 10 May 25 17:58 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh findmnt                                            | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-691821                                                     | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| update-context | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-691821 ssh pgrep                                              | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-691821 image build -t                                         | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | localhost/my-image:functional-691821                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-691821 image ls                                               | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-691821                                                        | functional-691821 | jenkins | v1.35.0 | 10 May 25 17:59 UTC | 10 May 25 17:59 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:58:53
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:58:53.248483 1182290 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:58:53.248630 1182290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:53.248640 1182290 out.go:358] Setting ErrFile to fd 2...
	I0510 17:58:53.248647 1182290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:53.248827 1182290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 17:58:53.249383 1182290 out.go:352] Setting JSON to false
	I0510 17:58:53.250412 1182290 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":20477,"bootTime":1746879456,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:58:53.250511 1182290 start.go:140] virtualization: kvm guest
	I0510 17:58:53.252374 1182290 out.go:177] * [functional-691821] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:58:53.253636 1182290 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:58:53.253653 1182290 notify.go:220] Checking for updates...
	I0510 17:58:53.256186 1182290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:58:53.257433 1182290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:58:53.258882 1182290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:58:53.259972 1182290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:58:53.261113 1182290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:58:53.262700 1182290 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:58:53.263147 1182290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:53.263200 1182290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:53.279049 1182290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0510 17:58:53.279474 1182290 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:53.279900 1182290 main.go:141] libmachine: Using API Version  1
	I0510 17:58:53.279945 1182290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:53.280307 1182290 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:53.280513 1182290 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:53.280799 1182290 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:58:53.281084 1182290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:53.281120 1182290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:53.296869 1182290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0510 17:58:53.297413 1182290 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:53.297937 1182290 main.go:141] libmachine: Using API Version  1
	I0510 17:58:53.297959 1182290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:53.298294 1182290 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:53.298467 1182290 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:53.331941 1182290 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 17:58:53.333118 1182290 start.go:304] selected driver: kvm2
	I0510 17:58:53.333135 1182290 start.go:908] validating driver "kvm2" against &{Name:functional-691821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-691821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:58:53.333262 1182290 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:58:53.334422 1182290 cni.go:84] Creating CNI manager for ""
	I0510 17:58:53.334481 1182290 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:58:53.334534 1182290 start.go:347] cluster config:
	{Name:functional-691821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-691821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:58:53.335966 1182290 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e817147458eb6       56cc512116c8f       9 minutes ago       Exited              mount-munger              0                   c60694e2ae4a9       busybox-mount
	cf9b4f2bda584       82e4c8a736a4f       9 minutes ago       Running             echoserver                0                   df87dfa2642c3       hello-node-fcfd88b6f-dmfws
	3cf8f0a97cccb       82e4c8a736a4f       9 minutes ago       Running             echoserver                0                   07de351a73ba2       hello-node-connect-58f9cf68d8-s9rfl
	e2d282b25a3fc       6e38f40d628db       10 minutes ago      Running             storage-provisioner       5                   16968ce74e551       storage-provisioner
	8d326984f158b       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       4                   16968ce74e551       storage-provisioner
	64c36ba7cc484       f1184a0bd7fe5       10 minutes ago      Running             kube-proxy                2                   b184287cd85b5       kube-proxy-64t85
	a07d63afac120       1cf5f116067c6       10 minutes ago      Running             coredns                   2                   18e2a2dc46b0b       coredns-674b8bbfcf-9frgq
	2216f7cab388d       6ba9545b2183e       10 minutes ago      Running             kube-apiserver            0                   1cf595b2025dc       kube-apiserver-functional-691821
	050e79a4fc59d       1d579cb6d6967       10 minutes ago      Running             kube-controller-manager   3                   84647453de7f2       kube-controller-manager-functional-691821
	b989fd8fae3bc       8d72586a76469       10 minutes ago      Running             kube-scheduler            2                   ff668932218ea       kube-scheduler-functional-691821
	75c55deac05d6       499038711c081       10 minutes ago      Running             etcd                      2                   b4a45e539be46       etcd-functional-691821
	ae0cf2a6aa1d6       1d579cb6d6967       11 minutes ago      Exited              kube-controller-manager   2                   84647453de7f2       kube-controller-manager-functional-691821
	2c82f924ca0f1       8d72586a76469       11 minutes ago      Exited              kube-scheduler            1                   ff668932218ea       kube-scheduler-functional-691821
	856acec20060e       499038711c081       11 minutes ago      Exited              etcd                      1                   b4a45e539be46       etcd-functional-691821
	c37c65e582838       1cf5f116067c6       11 minutes ago      Exited              coredns                   1                   18e2a2dc46b0b       coredns-674b8bbfcf-9frgq
	21669a32b7e33       f1184a0bd7fe5       11 minutes ago      Exited              kube-proxy                1                   b184287cd85b5       kube-proxy-64t85
	
	
	==> containerd <==
	May 10 18:02:00 functional-691821 containerd[4313]: time="2025-05-10T18:02:00.803411478Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	May 10 18:02:00 functional-691821 containerd[4313]: time="2025-05-10T18:02:00.805790262Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:02:01 functional-691821 containerd[4313]: time="2025-05-10T18:02:01.406258787Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:02:03 functional-691821 containerd[4313]: time="2025-05-10T18:02:03.052332072Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:02:03 functional-691821 containerd[4313]: time="2025-05-10T18:02:03.052457819Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	May 10 18:04:16 functional-691821 containerd[4313]: time="2025-05-10T18:04:16.546608764Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	May 10 18:04:16 functional-691821 containerd[4313]: time="2025-05-10T18:04:16.549316692Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:04:17 functional-691821 containerd[4313]: time="2025-05-10T18:04:17.151703292Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:04:19 functional-691821 containerd[4313]: time="2025-05-10T18:04:19.187760214Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:04:19 functional-691821 containerd[4313]: time="2025-05-10T18:04:19.187869427Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=11972"
	May 10 18:04:39 functional-691821 containerd[4313]: time="2025-05-10T18:04:39.545630816Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	May 10 18:04:39 functional-691821 containerd[4313]: time="2025-05-10T18:04:39.549088694Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:04:40 functional-691821 containerd[4313]: time="2025-05-10T18:04:40.143699584Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:04:42 functional-691821 containerd[4313]: time="2025-05-10T18:04:42.174139699Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:04:42 functional-691821 containerd[4313]: time="2025-05-10T18:04:42.174294837Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21215"
	May 10 18:04:49 functional-691821 containerd[4313]: time="2025-05-10T18:04:49.546982232Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:04:49 functional-691821 containerd[4313]: time="2025-05-10T18:04:49.549865995Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:04:50 functional-691821 containerd[4313]: time="2025-05-10T18:04:50.130743364Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:04:51 functional-691821 containerd[4313]: time="2025-05-10T18:04:51.795612810Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:04:51 functional-691821 containerd[4313]: time="2025-05-10T18:04:51.795699096Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	May 10 18:04:52 functional-691821 containerd[4313]: time="2025-05-10T18:04:52.546879744Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	May 10 18:04:52 functional-691821 containerd[4313]: time="2025-05-10T18:04:52.550220232Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:04:53 functional-691821 containerd[4313]: time="2025-05-10T18:04:53.142911479Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	May 10 18:04:54 functional-691821 containerd[4313]: time="2025-05-10T18:04:54.802377967Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:04:54 functional-691821 containerd[4313]: time="2025-05-10T18:04:54.802527419Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	
	
	==> coredns [a07d63afac1208e5d7b51664838271e7c178dc2599c84b9861fe7d38718ec2f0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:56700 - 25698 "HINFO IN 1577412054363027221.8669343729915223077. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033419639s
	
	
	==> coredns [c37c65e582838a84367d5743a692b4338f105733d6e1734cf7d38f0aebfeb391] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:53064 - 49758 "HINFO IN 1882376968732920025.5461233709581115618. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025064626s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-691821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-691821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=functional-691821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_56_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:55:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-691821
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:08:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 18:04:19 +0000   Sat, 10 May 2025 17:55:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 18:04:19 +0000   Sat, 10 May 2025 17:55:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 18:04:19 +0000   Sat, 10 May 2025 17:55:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 18:04:19 +0000   Sat, 10 May 2025 17:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    functional-691821
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912740Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6b92320f3b244d19d8de03c98e3fa63
	  System UUID:                a6b92320-f3b2-44d1-9d8d-e03c98e3fa63
	  Boot ID:                    66acc6ce-de22-402d-970e-cba38e9f4da1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2024.11.2
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-s9rfl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  default                     hello-node-fcfd88b6f-dmfws                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  default                     mysql-58ccfd96bb-zxjxx                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-674b8bbfcf-9frgq                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-691821                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-691821              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-691821     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-64t85                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-691821              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-qgdvz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-bm8x7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-691821 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node functional-691821 event: Registered Node functional-691821 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-691821 event: Registered Node functional-691821 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-691821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-691821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-691821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-691821 event: Registered Node functional-691821 in Controller
	
	
	==> dmesg <==
	[  +0.004996] (rpcbind)[142]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.123380] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085397] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.117100] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.094887] kauditd_printk_skb: 46 callbacks suppressed
	[May10 17:56] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.988309] kauditd_printk_skb: 19 callbacks suppressed
	[ +30.492392] kauditd_printk_skb: 77 callbacks suppressed
	[  +0.840875] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.078180] kauditd_printk_skb: 13 callbacks suppressed
	[May10 17:57] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.258728] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.802612] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.127631] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.942984] kauditd_printk_skb: 81 callbacks suppressed
	[May10 17:58] kauditd_printk_skb: 10 callbacks suppressed
	[  +4.185734] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.522787] kauditd_printk_skb: 11 callbacks suppressed
	[  +1.211923] kauditd_printk_skb: 33 callbacks suppressed
	[  +1.856047] kauditd_printk_skb: 19 callbacks suppressed
	[  +4.809920] kauditd_printk_skb: 15 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 25 callbacks suppressed
	[  +2.936875] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [75c55deac05d69f20fa045dc7299aba1671d3f146746315886d4df3296540b21] <==
	{"level":"info","ts":"2025-05-10T17:57:59.813505Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:59.813384Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T17:58:01.584903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 is starting a new election at term 3"}
	{"level":"info","ts":"2025-05-10T17:58:01.584970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:58:01.585064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgPreVoteResp from d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2025-05-10T17:58:01.585095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became candidate at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.585153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgVoteResp from d4b4d4eeb3ae7df8 at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.585200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became leader at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.585210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4b4d4eeb3ae7df8 elected leader d4b4d4eeb3ae7df8 at term 4"}
	{"level":"info","ts":"2025-05-10T17:58:01.586844Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"d4b4d4eeb3ae7df8","local-member-attributes":"{Name:functional-691821 ClientURLs:[https://192.168.39.96:2379]}","request-path":"/0/members/d4b4d4eeb3ae7df8/attributes","cluster-id":"f38f0aa72455c2b8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:58:01.586892Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:58:01.587090Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:58:01.587920Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:58:01.588064Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:58:01.588935Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:58:01.588969Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:58:01.589441Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:58:01.590850Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.96:2379"}
	{"level":"warn","ts":"2025-05-10T17:58:54.157352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.93852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:58:54.157432Z","caller":"traceutil/trace.go:171","msg":"trace[919417013] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:845; }","duration":"189.134784ms","start":"2025-05-10T17:58:53.968285Z","end":"2025-05-10T17:58:54.157420Z","steps":["trace[919417013] 'range keys from in-memory index tree'  (duration: 188.861938ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T17:58:54.157604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.022159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T17:58:54.157622Z","caller":"traceutil/trace.go:171","msg":"trace[935324799] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:845; }","duration":"148.082091ms","start":"2025-05-10T17:58:54.009534Z","end":"2025-05-10T17:58:54.157616Z","steps":["trace[935324799] 'range keys from in-memory index tree'  (duration: 147.943061ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T18:08:08.286294Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1324}
	{"level":"info","ts":"2025-05-10T18:08:08.311842Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1324,"took":"25.059882ms","hash":126786555,"current-db-size-bytes":4575232,"current-db-size":"4.6 MB","current-db-size-in-use-bytes":2060288,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-05-10T18:08:08.312097Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":126786555,"revision":1324,"compact-revision":-1}
	
	
	==> etcd [856acec20060e53c35110df29d6d2e6f031d2518c6fe6d8566922535e7deaeef] <==
	{"level":"info","ts":"2025-05-10T17:57:05.824420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:57:05.824579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgPreVoteResp from d4b4d4eeb3ae7df8 at term 2"}
	{"level":"info","ts":"2025-05-10T17:57:05.824708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.824830Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 received MsgVoteResp from d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.824991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4b4d4eeb3ae7df8 became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.825157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4b4d4eeb3ae7df8 elected leader d4b4d4eeb3ae7df8 at term 3"}
	{"level":"info","ts":"2025-05-10T17:57:05.832239Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"d4b4d4eeb3ae7df8","local-member-attributes":"{Name:functional-691821 ClientURLs:[https://192.168.39.96:2379]}","request-path":"/0/members/d4b4d4eeb3ae7df8/attributes","cluster-id":"f38f0aa72455c2b8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:57:05.832292Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:57:05.832739Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:57:05.832785Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:57:05.832329Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:57:05.833673Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:57:05.833841Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:57:05.834554Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:57:05.834573Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.96:2379"}
	{"level":"info","ts":"2025-05-10T17:57:58.744115Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T17:57:58.744232Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-691821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	{"level":"info","ts":"2025-05-10T17:57:58.745876Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4b4d4eeb3ae7df8","current-leader-member-id":"d4b4d4eeb3ae7df8"}
	{"level":"warn","ts":"2025-05-10T17:57:58.745956Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:57:58.745986Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:57:58.746029Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.96:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:57:58.746069Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T17:57:58.749474Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:58.749584Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.39.96:2380"}
	{"level":"info","ts":"2025-05-10T17:57:58.749593Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-691821","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.96:2380"],"advertise-client-urls":["https://192.168.39.96:2379"]}
	
	
	==> kernel <==
	 18:08:36 up 13 min,  0 user,  load average: 0.47, 0.32, 0.25
	Linux functional-691821 5.10.207 #1 SMP Fri May 9 03:49:24 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2024.11.2"
	
	
	==> kube-apiserver [2216f7cab388d023cba0dd87098e5d38a47d25ab30366eae1ececa307d730989] <==
	I0510 17:58:10.414079       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 17:58:10.567267       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0510 17:58:10.834368       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.96]
	I0510 17:58:10.835564       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 17:58:10.843158       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 17:58:11.246747       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 17:58:11.283333       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 17:58:11.312191       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 17:58:11.317941       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 17:58:12.834577       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:13.079769       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 17:58:30.131589       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:30.144804       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.188.157"}
	I0510 17:58:33.594676       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:35.152296       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.132.59"}
	I0510 17:58:35.157933       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:38.312353       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:38.322991       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.235.170"}
	I0510 17:58:43.513813       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:43.523104       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.188.137"}
	I0510 17:58:54.528699       1 controller.go:667] quota admission added evaluator for: namespaces
	I0510 17:58:54.828881       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.148.186"}
	I0510 17:58:54.832290       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:58:54.864279       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.126.207"}
	I0510 18:08:09.547526       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [050e79a4fc59dac114131acb37aa8a9d9a80425a2779cd3da14fb462c0946fe2] <==
	I0510 17:58:13.106087       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:58:13.106227       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0510 17:58:13.106357       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 17:58:13.106374       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 17:58:13.111960       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:58:13.115883       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 17:58:13.125966       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:58:13.138270       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 17:58:13.155184       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 17:58:13.155214       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 17:58:13.155636       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 17:58:13.155809       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-691821"
	I0510 17:58:13.155895       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 17:58:13.156092       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 17:58:13.541983       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:58:13.553356       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:58:13.553577       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:58:13.553822       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0510 17:58:54.626700       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.651543       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.651688       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.673886       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.674274       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.683624       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:58:54.685729       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
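
The repeated "serviceaccount \"kubernetes-dashboard\" not found" errors are the ReplicaSet controller racing the dashboard addon's manifest apply: pod creation is retried until the ServiceAccount object lands, and the pod listing at the top of this report shows both dashboard pods were eventually created. A quick confirmation, assuming the minikube-generated kubectl context name:

	# The ServiceAccount should exist and both dashboard ReplicaSets should have pods
	kubectl --context functional-691821 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
	kubectl --context functional-691821 -n kubernetes-dashboard get pods -o wide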
	
	
	==> kube-controller-manager [ae0cf2a6aa1d620418f5d84d42322a622df1f38d99ad5e0e4630991991a563d7] <==
	I0510 17:57:21.220606       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 17:57:21.220836       1 shared_informer.go:357] "Caches are synced" controller="expand"
	I0510 17:57:21.224274       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 17:57:21.226669       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 17:57:21.226976       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 17:57:21.228359       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 17:57:21.228611       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 17:57:21.229897       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 17:57:21.231063       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 17:57:21.232236       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0510 17:57:21.273439       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 17:57:21.321261       1 shared_informer.go:357] "Caches are synced" controller="cronjob"
	I0510 17:57:21.425200       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 17:57:21.425661       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 17:57:21.426717       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-691821"
	I0510 17:57:21.426926       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 17:57:21.453313       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 17:57:21.528653       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:57:21.532160       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:57:21.568787       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:57:21.570183       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:57:21.926438       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:57:21.926468       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:57:21.926474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0510 17:57:21.953665       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [21669a32b7e33862427a098afcd9bfcdebe112b21a94c5be46fc56fb52098d7f] <==
	E0510 17:56:57.550398       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 17:57:05.957863       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.96"]
	E0510 17:57:05.958240       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:57:06.026514       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 17:57:06.026753       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 17:57:06.026899       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:57:06.053117       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:57:06.056696       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:57:06.056725       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:57:06.064233       1 config.go:199] "Starting service config controller"
	I0510 17:57:06.064270       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:57:06.064300       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:57:06.064304       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:57:06.064332       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:57:06.064350       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:57:06.065114       1 config.go:329] "Starting node config controller"
	I0510 17:57:06.065135       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:57:06.165180       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:57:06.165469       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:57:06.165991       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:57:06.166076       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [64c36ba7cc4847ab2180b08ff83121ec785778882563a18aa707bbb1b259e14f] <==
	E0510 17:58:11.107097       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0510 17:58:11.119418       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.39.96"]
	E0510 17:58:11.119678       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:58:11.171387       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0510 17:58:11.171414       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0510 17:58:11.171434       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:58:11.182345       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:58:11.182537       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:58:11.182549       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:58:11.183639       1 config.go:199] "Starting service config controller"
	I0510 17:58:11.183654       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:58:11.188921       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:58:11.188930       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:58:11.188944       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:58:11.188947       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:58:11.189192       1 config.go:329] "Starting node config controller"
	I0510 17:58:11.189198       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:58:11.284157       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:58:11.289777       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 17:58:11.289965       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:58:11.294077       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
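
Both kube-proxy instances fail the same nftables cleanup step before settling on the iptables proxier; the guest's 5.10 Buildroot kernel evidently lacks nftables support for the ip6 family, so the error is cosmetic here. The probe can be reproduced by hand, assuming the nft binary is present in the guest image:

	# Re-run kube-proxy's capability probe; on this kernel it fails with the same message
	minikube -p functional-691821 ssh -- "echo 'add table ip6 kube-proxy' | sudo nft -f /dev/stdin"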
	
	
	==> kube-scheduler [2c82f924ca0f1671f3d7fc7bb20eb550e3f9947834f82c3af76c731e5da0367a] <==
	I0510 17:57:04.671506       1 serving.go:386] Generated self-signed cert in-memory
	I0510 17:57:06.395354       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:57:06.395446       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:57:06.399932       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0510 17:57:06.400202       1 shared_informer.go:350] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0510 17:57:06.400413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:57:06.400449       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:57:06.400550       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:57:06.400585       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:57:06.400912       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:57:06.401186       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:57:06.500481       1 shared_informer.go:357] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0510 17:57:06.500698       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:57:06.500850       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E0510 17:57:17.930452       1 reflector.go:200] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:57:17.989860       1 reflector.go:200] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:57:17.992115       1 reflector.go:200] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0510 17:57:58.684806       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0510 17:57:58.684880       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0510 17:57:58.684992       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b989fd8fae3bc05f3b514c7d418b3c6bcae98472c1c94d9d0517ea04b19f21a0] <==
	E0510 17:58:04.093433       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.96:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 17:58:04.274503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.96:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:58:04.375831       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.96:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:58:04.404870       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:58:04.438325       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 17:58:04.457488       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.96:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:58:04.560877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.39.96:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:58:04.726493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:58:04.820850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.96:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 17:58:04.969521       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.96:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 17:58:04.978783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.96:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 17:58:05.095327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.96:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 17:58:05.151575       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.96:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.96:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:58:09.532438       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 17:58:09.532808       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 17:58:09.532977       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 17:58:09.533244       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0510 17:58:09.533305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 17:58:09.533354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 17:58:09.534120       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 17:58:09.534399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0510 17:58:09.534435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 17:58:09.532538       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 17:58:09.544324       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0510 17:58:10.211829       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 18:07:17 functional-691821 kubelet[5093]: E0510 18:07:17.546801    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:07:21 functional-691821 kubelet[5093]: E0510 18:07:21.545820    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:07:29 functional-691821 kubelet[5093]: E0510 18:07:29.546405    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:07:29 functional-691821 kubelet[5093]: E0510 18:07:29.546798    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:07:30 functional-691821 kubelet[5093]: E0510 18:07:30.545861    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:07:35 functional-691821 kubelet[5093]: E0510 18:07:35.545851    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:07:40 functional-691821 kubelet[5093]: E0510 18:07:40.548723    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:07:41 functional-691821 kubelet[5093]: E0510 18:07:41.545998    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:07:44 functional-691821 kubelet[5093]: E0510 18:07:44.547171    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:07:48 functional-691821 kubelet[5093]: E0510 18:07:48.547705    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:07:54 functional-691821 kubelet[5093]: E0510 18:07:54.545766    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:07:55 functional-691821 kubelet[5093]: E0510 18:07:55.552125    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:07:57 functional-691821 kubelet[5093]: E0510 18:07:57.546567    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:08:00 functional-691821 kubelet[5093]: E0510 18:08:00.547290    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:08:07 functional-691821 kubelet[5093]: E0510 18:08:07.546144    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:08:09 functional-691821 kubelet[5093]: E0510 18:08:09.546543    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:08:09 functional-691821 kubelet[5093]: E0510 18:08:09.546975    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:08:12 functional-691821 kubelet[5093]: E0510 18:08:12.546601    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:08:19 functional-691821 kubelet[5093]: E0510 18:08:19.545780    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:08:21 functional-691821 kubelet[5093]: E0510 18:08:21.545775    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	May 10 18:08:21 functional-691821 kubelet[5093]: E0510 18:08:21.546925    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:08:23 functional-691821 kubelet[5093]: E0510 18:08:23.545855    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-qgdvz" podUID="05320e38-6043-4947-b16b-467d33cff404"
	May 10 18:08:32 functional-691821 kubelet[5093]: E0510 18:08:32.545505    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="48ed2b59-4537-4681-861b-f3fd1f291679"
	May 10 18:08:34 functional-691821 kubelet[5093]: E0510 18:08:34.545892    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-zxjxx" podUID="d200ca8f-e5a9-4b51-be98-3a6c36a30ba2"
	May 10 18:08:34 functional-691821 kubelet[5093]: E0510 18:08:34.546248    5093 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-bm8x7" podUID="99e5bc79-2d2f-4f70-9eea-e8b4e9296627"
	
	
	==> storage-provisioner [8d326984f158b0b8759ee8a0dc7c4228ac980139324c7521604b634092b67fb2] <==
	I0510 17:58:10.974678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0510 17:58:10.978170       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e2d282b25a3fc112e61c9297ab3312334703edf6d6cc15e2744eb97494496030] <==
	W0510 18:08:11.836351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:13.839838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:13.848531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:15.851927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:15.856394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:17.859079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:17.864059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:19.868219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:19.876629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:21.879975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:21.885263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:23.888558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:23.893470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:25.896832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:25.901128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:27.903981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:27.912732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:29.916715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:29.921904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:31.925263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:31.930184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:33.932776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:33.937495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:35.941390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:08:35.946708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
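The repeated storage-provisioner warnings above come from client code that still lists and watches the core/v1 Endpoints API, which the server flags as deprecated in favor of discovery.k8s.io/v1 EndpointSlice from v1.33 on; they are noise in this run, not a failure. For reference, the same data can be queried by hand through both APIs (an illustrative sketch using standard kubectl against the context from this run, not part of the test itself):

	kubectl --context functional-691821 get endpoints -A        # core/v1, triggers the deprecation warning
	kubectl --context functional-691821 get endpointslices -A   # discovery.k8s.io/v1 replacement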
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-691821 -n functional-691821
helpers_test.go:261: (dbg) Run:  kubectl --context functional-691821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-691821 describe pod busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-691821 describe pod busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7: exit status 1 (77.389894ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-691821/192.168.39.96
	Start Time:       Sat, 10 May 2025 17:58:51 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://e817147458eb6d601ada96e9a5974fccfc28da791994115c324e3151ff0c4bae
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 May 2025 17:58:53 +0000
	      Finished:     Sat, 10 May 2025 17:58:53 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x55zh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-x55zh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m46s  default-scheduler  Successfully assigned default/busybox-mount to functional-691821
	  Normal  Pulling    9m46s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m44s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.245s (2.245s including waiting). Image size: 2395207 bytes.
	  Normal  Created    9m44s  kubelet            Created container: mount-munger
	  Normal  Started    9m44s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-zxjxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-691821/192.168.39.96
	Start Time:       Sat, 10 May 2025 17:58:35 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2wxvj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2wxvj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-58ccfd96bb-zxjxx to functional-691821
	  Warning  Failed     8m27s (x2 over 9m58s)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Error: ErrImagePull
	  Warning  Failed     7m4s (x3 over 9m41s)    kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m46s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m32s (x21 over 9m58s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-691821/192.168.39.96
	Start Time:       Sat, 10 May 2025 17:58:44 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hbdmw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-hbdmw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m53s                   default-scheduler  Successfully assigned default/sp-pod to functional-691821
	  Normal   Pulling    6m53s (x5 over 9m53s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m51s (x5 over 9m50s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:c15da6c91de8d2f436196f3a768483ad32c258ed4e1beb3d367a27ed67253e66: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m51s (x5 over 9m50s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m41s (x20 over 9m50s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m26s (x21 over 9m50s)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-qgdvz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-bm8x7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-691821 describe pod busybox-mount mysql-58ccfd96bb-zxjxx sp-pod dashboard-metrics-scraper-5d59dccf9b-qgdvz kubernetes-dashboard-7779f9b69b-bm8x7: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.67s)
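The MySQL failure, like the other ImagePullBackOff failures in this run, traces to Docker Hub returning "429 Too Many Requests" for unauthenticated pulls. One conventional mitigation is to authenticate the pulls via a registry secret; the sketch below uses standard kubectl commands with placeholder credentials that are assumptions rather than values from this report (each namespace that pulls from Docker Hub needs its own secret):

	# Create a Docker Hub pull secret in the default namespace (placeholder credentials).
	kubectl --context functional-691821 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> \
	  --docker-password=<dockerhub-token>

	# Attach it to the default service account so existing pod specs pick it up unchanged.
	kubectl --context functional-691821 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Pre-loading the images with "minikube image load <image> -p functional-691821", or mirroring them to a registry that is not rate-limited, avoids the Docker Hub pulls entirely.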

                                                
                                    

Test pass (285/329)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 24.41
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.33.0/json-events 12.21
13 TestDownloadOnly/v1.33.0/preload-exists 0
17 TestDownloadOnly/v1.33.0/LogsDuration 0.06
18 TestDownloadOnly/v1.33.0/DeleteAll 0.15
19 TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.65
22 TestOffline 117.82
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 216.37
29 TestAddons/serial/Volcano 41.91
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 16.91
37 TestAddons/parallel/InspektorGadget 10.72
38 TestAddons/parallel/MetricsServer 5.77
40 TestAddons/parallel/CSI 50.85
41 TestAddons/parallel/Headlamp 17.91
42 TestAddons/parallel/CloudSpanner 5.71
44 TestAddons/parallel/NvidiaDevicePlugin 6.58
45 TestAddons/parallel/Yakd 12.2
47 TestAddons/StoppedEnableDisable 91.29
48 TestCertOptions 54.16
49 TestCertExpiration 304.88
51 TestForceSystemdFlag 75.88
52 TestForceSystemdEnv 49.36
54 TestKVMDriverInstallOrUpdate 5.16
58 TestErrorSpam/setup 44.54
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.76
61 TestErrorSpam/pause 1.79
62 TestErrorSpam/unpause 1.86
63 TestErrorSpam/stop 4.78
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 89.3
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 49.87
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
75 TestFunctional/serial/CacheCmd/cache/add_local 1.97
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 44.02
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.35
86 TestFunctional/serial/LogsFileCmd 1.4
87 TestFunctional/serial/InvalidService 4.76
89 TestFunctional/parallel/ConfigCmd 0.35
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.76
97 TestFunctional/parallel/ServiceCmdConnect 11.47
98 TestFunctional/parallel/AddonsCmd 0.14
101 TestFunctional/parallel/SSHCmd 0.43
102 TestFunctional/parallel/CpCmd 1.33
104 TestFunctional/parallel/FileSync 0.23
105 TestFunctional/parallel/CertSync 1.34
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
113 TestFunctional/parallel/License 0.56
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
118 TestFunctional/parallel/Version/components 0.48
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.07
121 TestFunctional/parallel/ImageCommands/Setup 1.7
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
136 TestFunctional/parallel/ProfileCmd/profile_list 0.34
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.3
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.18
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
144 TestFunctional/parallel/ServiceCmd/DeployApp 7.16
145 TestFunctional/parallel/MountCmd/any-port 7.23
146 TestFunctional/parallel/ServiceCmd/List 0.46
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
149 TestFunctional/parallel/ServiceCmd/Format 0.29
150 TestFunctional/parallel/ServiceCmd/URL 0.29
151 TestFunctional/parallel/MountCmd/specific-port 1.89
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 207.5
161 TestMultiControlPlane/serial/DeployApp 6.49
162 TestMultiControlPlane/serial/PingHostFromPods 1.17
163 TestMultiControlPlane/serial/AddWorkerNode 51.99
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
166 TestMultiControlPlane/serial/CopyFile 13.32
167 TestMultiControlPlane/serial/StopSecondaryNode 91.67
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
169 TestMultiControlPlane/serial/RestartSecondaryNode 24.02
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.1
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 408.7
172 TestMultiControlPlane/serial/DeleteSecondaryNode 7.07
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
174 TestMultiControlPlane/serial/StopCluster 272.99
175 TestMultiControlPlane/serial/RestartCluster 101.56
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
177 TestMultiControlPlane/serial/AddSecondaryNode 78.99
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
182 TestJSONOutput/start/Command 87.75
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.75
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.65
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.58
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 93.36
214 TestMountStart/serial/StartWithMountFirst 28.29
215 TestMountStart/serial/VerifyMountFirst 0.39
216 TestMountStart/serial/StartWithMountSecond 29.02
217 TestMountStart/serial/VerifyMountSecond 0.38
218 TestMountStart/serial/DeleteFirst 0.7
219 TestMountStart/serial/VerifyMountPostDelete 0.39
220 TestMountStart/serial/Stop 1.29
221 TestMountStart/serial/RestartStopped 24.05
222 TestMountStart/serial/VerifyMountPostStop 0.38
225 TestMultiNode/serial/FreshStart2Nodes 114.04
226 TestMultiNode/serial/DeployApp2Nodes 5.06
227 TestMultiNode/serial/PingHostFrom2Pods 0.76
228 TestMultiNode/serial/AddNode 50.24
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.59
231 TestMultiNode/serial/CopyFile 7.42
232 TestMultiNode/serial/StopNode 2.25
233 TestMultiNode/serial/StartAfterStop 33.35
234 TestMultiNode/serial/RestartKeepsNodes 313.63
235 TestMultiNode/serial/DeleteNode 2.22
236 TestMultiNode/serial/StopMultiNode 182.11
237 TestMultiNode/serial/RestartMultiNode 86.81
238 TestMultiNode/serial/ValidateNameConflict 46.66
243 TestPreload 271.51
245 TestScheduledStopUnix 117.52
249 TestRunningBinaryUpgrade 207.75
251 TestKubernetesUpgrade 214.99
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 98.8
263 TestNetworkPlugins/group/false 4.12
274 TestNoKubernetes/serial/StartWithStopK8s 54.46
276 TestPause/serial/Start 99.25
277 TestNoKubernetes/serial/Start 29.69
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
279 TestNoKubernetes/serial/ProfileList 18.99
280 TestNoKubernetes/serial/Stop 1.65
281 TestNoKubernetes/serial/StartNoArgs 25.79
282 TestPause/serial/SecondStartNoReconfiguration 83.09
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
284 TestStoppedBinaryUpgrade/Setup 2.28
285 TestStoppedBinaryUpgrade/Upgrade 118.73
286 TestPause/serial/Pause 0.8
287 TestPause/serial/VerifyStatus 0.28
288 TestPause/serial/Unpause 0.74
289 TestPause/serial/PauseAgain 0.91
290 TestPause/serial/DeletePaused 0.89
291 TestPause/serial/VerifyDeletedResources 1
292 TestNetworkPlugins/group/auto/Start 117.46
293 TestNetworkPlugins/group/kindnet/Start 86.33
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
295 TestNetworkPlugins/group/enable-default-cni/Start 59.66
296 TestNetworkPlugins/group/auto/KubeletFlags 0.24
297 TestNetworkPlugins/group/auto/NetCatPod 9.23
298 TestNetworkPlugins/group/auto/DNS 0.17
299 TestNetworkPlugins/group/auto/Localhost 0.17
300 TestNetworkPlugins/group/auto/HairPin 0.16
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/calico/Start 78.92
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
304 TestNetworkPlugins/group/kindnet/NetCatPod 9.49
305 TestNetworkPlugins/group/kindnet/DNS 0.18
306 TestNetworkPlugins/group/kindnet/Localhost 0.13
307 TestNetworkPlugins/group/kindnet/HairPin 0.14
308 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
309 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
310 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
311 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
312 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
313 TestNetworkPlugins/group/flannel/Start 83.07
314 TestNetworkPlugins/group/custom-flannel/Start 95.68
315 TestNetworkPlugins/group/bridge/Start 80.32
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/calico/KubeletFlags 0.3
318 TestNetworkPlugins/group/calico/NetCatPod 13.36
319 TestNetworkPlugins/group/calico/DNS 0.17
320 TestNetworkPlugins/group/calico/Localhost 0.12
321 TestNetworkPlugins/group/calico/HairPin 0.13
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
324 TestNetworkPlugins/group/flannel/NetCatPod 9.32
326 TestStartStop/group/old-k8s-version/serial/FirstStart 148.71
327 TestNetworkPlugins/group/flannel/DNS 0.16
328 TestNetworkPlugins/group/flannel/Localhost 0.13
329 TestNetworkPlugins/group/flannel/HairPin 0.15
330 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
331 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.25
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
334 TestStartStop/group/no-preload/serial/FirstStart 106.23
335 TestNetworkPlugins/group/bridge/NetCatPod 10.25
336 TestNetworkPlugins/group/custom-flannel/DNS 0.2
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
339 TestNetworkPlugins/group/bridge/DNS 0.14
340 TestNetworkPlugins/group/bridge/Localhost 0.25
341 TestNetworkPlugins/group/bridge/HairPin 0.15
343 TestStartStop/group/embed-certs/serial/FirstStart 100.48
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 112.89
346 TestStartStop/group/no-preload/serial/DeployApp 9.29
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
348 TestStartStop/group/no-preload/serial/Stop 90.83
349 TestStartStop/group/embed-certs/serial/DeployApp 10.27
350 TestStartStop/group/old-k8s-version/serial/DeployApp 10.46
351 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
352 TestStartStop/group/embed-certs/serial/Stop 91.16
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.98
354 TestStartStop/group/old-k8s-version/serial/Stop 91.64
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.33
358 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
359 TestStartStop/group/no-preload/serial/SecondStart 44.14
360 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
361 TestStartStop/group/embed-certs/serial/SecondStart 45.15
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
363 TestStartStop/group/old-k8s-version/serial/SecondStart 149.72
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 73.43
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
369 TestStartStop/group/no-preload/serial/Pause 2.99
371 TestStartStop/group/newest-cni/serial/FirstStart 72.42
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
375 TestStartStop/group/embed-certs/serial/Pause 2.59
376 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
378 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
379 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.82
380 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
382 TestStartStop/group/newest-cni/serial/Stop 2.32
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
384 TestStartStop/group/newest-cni/serial/SecondStart 33.57
385 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
389 TestStartStop/group/newest-cni/serial/Pause 2.5
390 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
392 TestStartStop/group/old-k8s-version/serial/Pause 2.58
TestDownloadOnly/v1.20.0/json-events (24.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-685238 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-685238 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (24.411215734s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.41s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0510 17:39:16.808533 1172304 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0510 17:39:16.808706 1172304 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
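The preload check is a local stat of the cached tarball, which is why it completes in 0.00s. An equivalent manual check against the path logged above (a sketch assuming the MINIKUBE_HOME used by this run):

	ls -lh /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4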

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-685238
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-685238: exit status 85 (67.162011ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-685238 | jenkins | v1.35.0 | 10 May 25 17:38 UTC |          |
	|         | -p download-only-685238        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:38:52
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:38:52.440261 1172317 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:38:52.440532 1172317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:38:52.440543 1172317 out.go:358] Setting ErrFile to fd 2...
	I0510 17:38:52.440547 1172317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:38:52.440778 1172317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	W0510 17:38:52.440950 1172317 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20720-1165049/.minikube/config/config.json: open /home/jenkins/minikube-integration/20720-1165049/.minikube/config/config.json: no such file or directory
	I0510 17:38:52.441575 1172317 out.go:352] Setting JSON to true
	I0510 17:38:52.442560 1172317 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":19276,"bootTime":1746879456,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:38:52.442673 1172317 start.go:140] virtualization: kvm guest
	I0510 17:38:52.444822 1172317 out.go:97] [download-only-685238] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:38:52.444998 1172317 notify.go:220] Checking for updates...
	W0510 17:38:52.444992 1172317 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball: no such file or directory
	I0510 17:38:52.446099 1172317 out.go:169] MINIKUBE_LOCATION=20720
	I0510 17:38:52.447266 1172317 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:38:52.448623 1172317 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:38:52.449723 1172317 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:38:52.450816 1172317 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0510 17:38:52.452947 1172317 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0510 17:38:52.453229 1172317 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:38:52.486177 1172317 out.go:97] Using the kvm2 driver based on user configuration
	I0510 17:38:52.486227 1172317 start.go:304] selected driver: kvm2
	I0510 17:38:52.486240 1172317 start.go:908] validating driver "kvm2" against <nil>
	I0510 17:38:52.486696 1172317 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:38:52.486799 1172317 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-1165049/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 17:38:52.502761 1172317 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 17:38:52.502834 1172317 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 17:38:52.503424 1172317 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0510 17:38:52.503591 1172317 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0510 17:38:52.503623 1172317 cni.go:84] Creating CNI manager for ""
	I0510 17:38:52.503686 1172317 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:38:52.503697 1172317 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 17:38:52.503767 1172317 start.go:347] cluster config:
	{Name:download-only-685238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-685238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:38:52.503954 1172317 iso.go:125] acquiring lock: {Name:mkc65d6718a5a236dac4e9cf2d61c7062c63896e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:38:52.505787 1172317 out.go:97] Downloading VM boot image ...
	I0510 17:38:52.505836 1172317 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/iso/amd64/minikube-v1.35.0-1746739450-20720-amd64.iso
	I0510 17:39:03.674721 1172317 out.go:97] Starting "download-only-685238" primary control-plane node in "download-only-685238" cluster
	I0510 17:39:03.674767 1172317 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0510 17:39:03.775022 1172317 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0510 17:39:03.775069 1172317 cache.go:56] Caching tarball of preloaded images
	I0510 17:39:03.775278 1172317 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0510 17:39:03.777238 1172317 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0510 17:39:03.777266 1172317 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0510 17:39:03.877086 1172317 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-685238 host does not exist
	  To start a cluster, run: "minikube start -p download-only-685238"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-685238
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.33.0/json-events (12.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-932669 --force --alsologtostderr --kubernetes-version=v1.33.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-932669 --force --alsologtostderr --kubernetes-version=v1.33.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (12.211673651s)
--- PASS: TestDownloadOnly/v1.33.0/json-events (12.21s)

                                                
                                    
TestDownloadOnly/v1.33.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/preload-exists
I0510 17:39:29.370044 1172304 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime containerd
I0510 17:39:29.370117 1172304 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.33.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.33.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-932669
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-932669: exit status 85 (64.445483ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-685238 | jenkins | v1.35.0 | 10 May 25 17:38 UTC |                     |
	|         | -p download-only-685238        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| delete  | -p download-only-685238        | download-only-685238 | jenkins | v1.35.0 | 10 May 25 17:39 UTC | 10 May 25 17:39 UTC |
	| start   | -o=json --download-only        | download-only-932669 | jenkins | v1.35.0 | 10 May 25 17:39 UTC |                     |
	|         | -p download-only-932669        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:39:17
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:39:17.201144 1172565 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:39:17.201263 1172565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:39:17.201270 1172565 out.go:358] Setting ErrFile to fd 2...
	I0510 17:39:17.201275 1172565 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:39:17.201452 1172565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 17:39:17.202019 1172565 out.go:352] Setting JSON to true
	I0510 17:39:17.203140 1172565 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":19301,"bootTime":1746879456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:39:17.203247 1172565 start.go:140] virtualization: kvm guest
	I0510 17:39:17.204985 1172565 out.go:97] [download-only-932669] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:39:17.205163 1172565 notify.go:220] Checking for updates...
	I0510 17:39:17.206267 1172565 out.go:169] MINIKUBE_LOCATION=20720
	I0510 17:39:17.207488 1172565 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:39:17.208706 1172565 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:39:17.210032 1172565 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:39:17.211115 1172565 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0510 17:39:17.213084 1172565 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0510 17:39:17.213322 1172565 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:39:17.245523 1172565 out.go:97] Using the kvm2 driver based on user configuration
	I0510 17:39:17.245565 1172565 start.go:304] selected driver: kvm2
	I0510 17:39:17.245572 1172565 start.go:908] validating driver "kvm2" against <nil>
	I0510 17:39:17.245901 1172565 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:39:17.245986 1172565 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20720-1165049/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0510 17:39:17.261727 1172565 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0510 17:39:17.261788 1172565 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 17:39:17.262377 1172565 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0510 17:39:17.262515 1172565 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0510 17:39:17.262543 1172565 cni.go:84] Creating CNI manager for ""
	I0510 17:39:17.262595 1172565 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0510 17:39:17.262605 1172565 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0510 17:39:17.262659 1172565 start.go:347] cluster config:
	{Name:download-only-932669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:download-only-932669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:39:17.262753 1172565 iso.go:125] acquiring lock: {Name:mkc65d6718a5a236dac4e9cf2d61c7062c63896e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:39:17.264250 1172565 out.go:97] Starting "download-only-932669" primary control-plane node in "download-only-932669" cluster
	I0510 17:39:17.264285 1172565 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime containerd
	I0510 17:39:17.772451 1172565 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.33.0/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4
	I0510 17:39:17.772492 1172565 cache.go:56] Caching tarball of preloaded images
	I0510 17:39:17.772670 1172565 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime containerd
	I0510 17:39:17.774449 1172565 out.go:97] Downloading Kubernetes v1.33.0 preload ...
	I0510 17:39:17.774477 1172565 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4 ...
	I0510 17:39:17.874727 1172565 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.33.0/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2628ab53df6815cc8810a4d1741060d8 -> /home/jenkins/minikube-integration/20720-1165049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-932669 host does not exist
	  To start a cluster, run: "minikube start -p download-only-932669"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.33.0/LogsDuration (0.06s)
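The download-only log above shows the preload mechanism end to end: preload.go probes for a remote tarball, and download.go fetches it with an md5 checksum appended as a query string, which the downloader verifies after the transfer. A minimal sketch of that fetch, assuming the hashicorp/go-getter library whose checksum query-string convention the URL follows (the destination path is illustrative, not the real cache layout):

package main

import (
	"context"
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// URL copied from the download.go line above; go-getter parses the
	// checksum query parameter and verifies the file after download.
	src := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.33.0/" +
		"preloaded-images-k8s-v18-v1.33.0-containerd-overlay2-amd64.tar.lz4" +
		"?checksum=md5:2628ab53df6815cc8810a4d1741060d8"
	client := &getter.Client{
		Ctx:  context.Background(),
		Src:  src,
		Dst:  "/tmp/preload.tar.lz4", // illustrative destination, not minikube's cache path
		Mode: getter.ClientModeFile,
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download or checksum verification failed: %v", err)
	}
	log.Println("preload tarball downloaded and md5-verified")
}

If the md5 does not match, Get returns an error rather than reporting success.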

TestDownloadOnly/v1.33.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.33.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.33.0/DeleteAll (0.15s)

TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-932669
--- PASS: TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I0510 17:39:29.977294 1172304 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-772258 --alsologtostderr --binary-mirror http://127.0.0.1:39889 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-772258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-772258
--- PASS: TestBinaryMirror (0.65s)

TestOffline (117.82s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-866299 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-866299 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m56.758061333s)
helpers_test.go:175: Cleaning up "offline-containerd-866299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-866299
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-866299: (1.058336186s)
--- PASS: TestOffline (117.82s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-661496
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-661496: exit status 85 (54.617824ms)

-- stdout --
	* Profile "addons-661496" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-661496"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-661496
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-661496: exit status 85 (55.705495ms)

-- stdout --
	* Profile "addons-661496" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-661496"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (216.37s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-661496 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-661496 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m36.374606381s)
--- PASS: TestAddons/Setup (216.37s)

TestAddons/serial/Volcano (41.91s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 14.664558ms
addons_test.go:807: volcano-scheduler stabilized in 14.864442ms
addons_test.go:815: volcano-admission stabilized in 15.010585ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-79d57559b6-glszg" [4d82f4ab-9a7a-44d5-85cd-a38a76f2b19b] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00915989s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-86dcc58c7c-7cq9m" [0e6c9889-ffaf-4c1e-8b0c-527cd517d005] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.004975898s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-6c8958f79d-jcsdq" [8b19ae8a-e48b-4bf0-a854-1c3736916926] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003423057s
addons_test.go:842: (dbg) Run:  kubectl --context addons-661496 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-661496 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-661496 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [700f48ed-3ed1-4bd7-8c13-fd1f6538c1d2] Pending
helpers_test.go:344: "test-job-nginx-0" [700f48ed-3ed1-4bd7-8c13-fd1f6538c1d2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [700f48ed-3ed1-4bd7-8c13-fd1f6538c1d2] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003609295s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 addons disable volcano --alsologtostderr -v=1: (11.47274482s)
--- PASS: TestAddons/serial/Volcano (41.91s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-661496 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-661496 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-661496 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-661496 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f26f918f-06fa-4ee2-9350-a5763746df7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f26f918f-06fa-4ee2-9350-a5763746df7a] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004051197s
addons_test.go:633: (dbg) Run:  kubectl --context addons-661496 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-661496 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-661496 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

TestAddons/parallel/Registry (16.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.417302ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-694bd45846-zdzh4" [4ba351e4-9daa-43da-8b99-54cf78e8b8d7] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003203504s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8pcc7" [b49e8001-c050-47a2-8471-50c2355d968d] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004404648s
addons_test.go:331: (dbg) Run:  kubectl --context addons-661496 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-661496 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-661496 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.892630113s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 ip
2025/05/10 17:44:23 [DEBUG] GET http://192.168.39.168:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.91s)
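The registry smoke test above is just an HTTP reachability probe against the in-cluster service name. A minimal Go equivalent of the wget --spider call, offered as a sketch that assumes it runs somewhere the cluster DNS name resolves (for example inside a pod, like the busybox the test launches):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// HEAD is the closest analogue of wget --spider: headers only, no body.
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}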

TestAddons/parallel/InspektorGadget (10.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x4s4s" [c82ac4c2-cbcd-4c5e-9137-5409d57cbcc6] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004282036s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 addons disable inspektor-gadget --alsologtostderr -v=1: (5.71179277s)
--- PASS: TestAddons/parallel/InspektorGadget (10.72s)

TestAddons/parallel/MetricsServer (5.77s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.802924ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-5w57m" [2e1beec0-5626-4c7b-88bc-8260d997758b] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004399469s
addons_test.go:402: (dbg) Run:  kubectl --context addons-661496 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

TestAddons/parallel/CSI (50.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0510 17:44:19.900821 1172304 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0510 17:44:19.904566 1172304 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0510 17:44:19.904591 1172304 kapi.go:107] duration metric: took 3.793447ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.802978ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-661496 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-661496 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b7705ba4-e187-4c68-8516-36023ef8a450] Pending
helpers_test.go:344: "task-pv-pod" [b7705ba4-e187-4c68-8516-36023ef8a450] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b7705ba4-e187-4c68-8516-36023ef8a450] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003634423s
addons_test.go:511: (dbg) Run:  kubectl --context addons-661496 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-661496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-661496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-661496 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-661496 delete pod task-pv-pod: (1.177159631s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-661496 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-661496 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-661496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-661496 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [af37e350-bba6-4a21-979e-7feb0d272110] Pending
helpers_test.go:344: "task-pv-pod-restore" [af37e350-bba6-4a21-979e-7feb0d272110] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [af37e350-bba6-4a21-979e-7feb0d272110] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00431464s
addons_test.go:553: (dbg) Run:  kubectl --context addons-661496 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-661496 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-661496 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.910226534s)
--- PASS: TestAddons/parallel/CSI (50.85s)
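The long run of helpers_test.go:394 lines above is a poll loop: the helper keeps re-reading {.status.phase} until the claim leaves Pending. A rough client-go equivalent of that wait, offered as a sketch; the kubeconfig path, the 2-second interval, and the function name are assumptions, not minikube's actual helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the claim's status.phase, mirroring the repeated
// "kubectl get pvc ... -o jsonpath={.status.phase}" calls in the log.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForPVCBound(kubernetes.NewForConfigOrDie(cfg), "default", "hpvc", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("hpvc is Bound")
}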

TestAddons/parallel/Headlamp (17.91s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-661496 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-661496 --alsologtostderr -v=1: (1.080224259s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-dh4w8" [33741f82-64ea-4566-81b7-4c772bdbdb68] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-dh4w8" [33741f82-64ea-4566-81b7-4c772bdbdb68] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-dh4w8" [33741f82-64ea-4566-81b7-4c772bdbdb68] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.006896196s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 addons disable headlamp --alsologtostderr -v=1: (5.819691022s)
--- PASS: TestAddons/parallel/Headlamp (17.91s)

TestAddons/parallel/CloudSpanner (5.71s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-b85f6bbb8-n94gj" [3a4df28f-3162-444a-81e1-3d8bf51f766f] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004982278s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

TestAddons/parallel/NvidiaDevicePlugin (6.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-j9pr5" [14fb66ef-5095-4274-8657-2c667308fa0d] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003857248s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

TestAddons/parallel/Yakd (12.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-stk8n" [58ecf41d-e287-4d09-9353-fbeb7861b719] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003979212s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-661496 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-661496 addons disable yakd --alsologtostderr -v=1: (6.190361427s)
--- PASS: TestAddons/parallel/Yakd (12.20s)

TestAddons/StoppedEnableDisable (91.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-661496
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-661496: (1m30.994462728s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-661496
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-661496
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-661496
--- PASS: TestAddons/StoppedEnableDisable (91.29s)

TestCertOptions (54.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-175221 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-175221 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (52.662376409s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-175221 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-175221 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-175221 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-175221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-175221
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-175221: (1.027231081s)
--- PASS: TestCertOptions (54.16s)

TestCertExpiration (304.88s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-409572 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0510 18:58:35.208758 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-409572 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m7.547383758s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-409572 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-409572 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (56.115341581s)
helpers_test.go:175: Cleaning up "cert-expiration-409572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-409572
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-409572: (1.21506294s)
--- PASS: TestCertExpiration (304.88s)
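What TestCertExpiration actually varies is the NotAfter date of the generated certificates: 3m on the first start, then 8760h (one year) on the second. A short standard-library sketch that reads that field, assuming local access to the cert path TestCertOptions inspects over ssh above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Same path the TestCertOptions ssh command reads inside the VM.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver cert expires:", cert.NotAfter) // reflects --cert-expiration
}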

TestForceSystemdFlag (75.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-911489 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-911489 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m14.583815155s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-911489 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-911489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-911489
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-911489: (1.089216575s)
--- PASS: TestForceSystemdFlag (75.88s)

TestForceSystemdEnv (49.36s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-972340 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-972340 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (48.097574253s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-972340 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-972340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-972340
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-972340: (1.064879702s)
--- PASS: TestForceSystemdEnv (49.36s)

TestKVMDriverInstallOrUpdate (5.16s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0510 18:56:06.112547 1172304 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0510 18:56:06.112714 1172304 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0510 18:56:06.143443 1172304 install.go:62] docker-machine-driver-kvm2: exit status 1
W0510 18:56:06.143630 1172304 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0510 18:56:06.143685 1172304 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2956701186/001/docker-machine-driver-kvm2
I0510 18:56:06.346122 1172304 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2956701186/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960] Decompressors:map[bz2:0xc0005db070 gz:0xc0005db078 tar:0xc0005db020 tar.bz2:0xc0005db030 tar.gz:0xc0005db040 tar.xz:0xc0005db050 tar.zst:0xc0005db060 tbz2:0xc0005db030 tgz:0xc0005db040 txz:0xc0005db050 tzst:0xc0005db060 xz:0xc0005db080 zip:0xc0005db090 zst:0xc0005db088] Getters:map[file:0xc000aaf4c0 http:0xc000522d70 https:0xc000522dc0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0510 18:56:06.346186 1172304 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2956701186/001/docker-machine-driver-kvm2
I0510 18:56:09.127691 1172304 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0510 18:56:09.127797 1172304 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0510 18:56:09.160806 1172304 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0510 18:56:09.160847 1172304 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0510 18:56:09.160918 1172304 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0510 18:56:09.160951 1172304 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2956701186/002/docker-machine-driver-kvm2
I0510 18:56:09.216781 1172304 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2956701186/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960] Decompressors:map[bz2:0xc0005db070 gz:0xc0005db078 tar:0xc0005db020 tar.bz2:0xc0005db030 tar.gz:0xc0005db040 tar.xz:0xc0005db050 tar.zst:0xc0005db060 tbz2:0xc0005db030 tgz:0xc0005db040 txz:0xc0005db050 tzst:0xc0005db060 xz:0xc0005db080 zip:0xc0005db090 zst:0xc0005db088] Getters:map[file:0xc000590570 http:0xc0008c6be0 https:0xc0008c6c30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0510 18:56:09.216851 1172304 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2956701186/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.16s)
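The two driver.go:46 entries above record a deliberate fallback, not a flake: the v1.3.0 release has no -amd64-suffixed binary, so the arch-specific checksum file 404s and the code retries the unsuffixed "common" name. A sketch of that retry shape, reusing the go-getter pattern from the preload example; the destination path and the fetch helper are assumptions:

package main

import (
	"context"
	"fmt"
	"log"
	"runtime"

	getter "github.com/hashicorp/go-getter"
)

// fetch downloads url to dst, verifying it against the sha256 file named
// in the checksum query string, as in the log lines above.
func fetch(url, dst string) error {
	c := &getter.Client{Ctx: context.Background(), Src: url, Dst: dst, Mode: getter.ClientModeFile}
	return c.Get()
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	dst := "/tmp/docker-machine-driver-kvm2" // hypothetical destination
	arch := fmt.Sprintf("%s-%s", base, runtime.GOARCH)
	if err := fetch(arch+"?checksum=file:"+arch+".sha256", dst); err != nil {
		log.Printf("arch-specific driver failed (%v); trying the common version", err)
		if err := fetch(base+"?checksum=file:"+base+".sha256", dst); err != nil {
			log.Fatal(err)
		}
	}
}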

TestErrorSpam/setup (44.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-567056 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-567056 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-567056 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-567056 --driver=kvm2  --container-runtime=containerd: (44.538995547s)
--- PASS: TestErrorSpam/setup (44.54s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 status
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (4.78s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 stop: (1.566903375s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 stop: (1.698474648s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-567056 --log_dir /tmp/nospam-567056 stop: (1.516947339s)
--- PASS: TestErrorSpam/stop (4.78s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20720-1165049/.minikube/files/etc/test/nested/copy/1172304/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (89.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-691821 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-691821 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m29.297084763s)
--- PASS: TestFunctional/serial/StartWithProxy (89.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (49.87s)

=== RUN   TestFunctional/serial/SoftStart
I0510 17:56:45.647960 1172304 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-691821 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-691821 --alsologtostderr -v=8: (49.869790528s)
functional_test.go:680: soft start took 49.870668204s for "functional-691821" cluster.
I0510 17:57:35.518108 1172304 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
--- PASS: TestFunctional/serial/SoftStart (49.87s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-691821 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 cache add registry.k8s.io/pause:3.1: (1.125346713s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 cache add registry.k8s.io/pause:3.3: (1.045263029s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 cache add registry.k8s.io/pause:latest: (1.077950163s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

TestFunctional/serial/CacheCmd/cache/add_local (1.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-691821 /tmp/TestFunctionalserialCacheCmdcacheadd_local772901467/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 cache add minikube-local-cache-test:functional-691821
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 cache add minikube-local-cache-test:functional-691821: (1.65682131s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 cache delete minikube-local-cache-test:functional-691821
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-691821
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.97s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.132891ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)
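
The reload round trip above (delete the image inside the node, confirm crictl no longer finds it, run cache reload, confirm it is back) can be scripted roughly as follows; a sketch only, with an illustrative demo profile.

package main

import (
	"fmt"
	"os/exec"
)

// mk runs a minikube subcommand against the illustrative profile.
func mk(args ...string) error {
	return exec.Command("minikube", append([]string{"-p", "demo"}, args...)...).Run()
}

func main() {
	const img = "registry.k8s.io/pause:latest"
	_ = mk("ssh", "sudo crictl rmi "+img) // drop the image inside the node
	if mk("ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("unexpected: image still present after rmi")
		return
	}
	_ = mk("cache", "reload") // repopulate the node from the host-side cache
	if err := mk("ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("reload did not restore the image:", err)
		return
	}
	fmt.Println("image restored from cache")
}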

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 kubectl -- --context functional-691821 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-691821 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (44.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-691821 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0510 17:58:07.065041 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:07.071443 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:07.082861 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:07.104215 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:07.145784 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:07.227316 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:07.388931 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:07.710689 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:08.352850 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:09.634475 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:12.196827 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:17.318766 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-691821 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.016437106s)
functional_test.go:778: restart took 44.016559725s for "functional-691821" cluster.
I0510 17:58:27.117550 1172304 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
--- PASS: TestFunctional/serial/ExtraConfig (44.02s)
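
--extra-config takes the component.key=value form; the run above enables the NamespaceAutoProvision admission plugin on the apiserver and blocks until all verified components are ready. A sketch of the same restart, with an illustrative demo profile:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// component.key=value form: here, an apiserver admission-plugin setting.
	cmd := exec.Command("minikube", "start", "-p", "demo",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all") // wait for every verified component, as the test does
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}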

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-691821 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
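
The health check above fetches the control-plane pods as JSON and asserts phase Running plus a Ready condition. A self-contained sketch of that parse; field names follow the Kubernetes pod schema, and the kubectl context is assumed to point at the cluster already.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList mirrors only the fields of `kubectl get po -o json` we need;
// encoding/json matches them case-insensitively against the lowercase keys.
type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "get", "po",
		"-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}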

TestFunctional/serial/LogsCmd (1.35s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 logs
E0510 17:58:27.560888 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 logs: (1.345855584s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 logs --file /tmp/TestFunctionalserialLogsFileCmd2122784370/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 logs --file /tmp/TestFunctionalserialLogsFileCmd2122784370/001/logs.txt: (1.403829232s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

TestFunctional/serial/InvalidService (4.76s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-691821 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-691821
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-691821: exit status 115 (290.724016ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.96:30520 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-691821 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-691821 delete -f testdata/invalidsvc.yaml: (1.247270408s)
--- PASS: TestFunctional/serial/InvalidService (4.76s)
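
Per the SVC_UNREACHABLE failure above, `minikube service` exits with a distinct status (115 in this run) when the service has no running pods. A sketch of reading that exit code from Go; the service and profile names are illustrative.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// invalid-svc is assumed to exist but have no running backing pod.
	err := exec.Command("minikube", "-p", "demo", "service", "invalid-svc").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("service command failed with exit code:", ee.ExitCode())
	}
}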

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 config get cpus: exit status 14 (59.160227ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 config get cpus: exit status 14 (56.079908ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
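
The config round trip above shows that `config get` on an unset key exits with status 14, while set/get/unset succeed. A sketch of the same cycle; the exit-code value is taken from this log, not from minikube documentation.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// get returns the value of a minikube config key, and the exit code on failure.
func get(key string) (string, int) {
	out, err := exec.Command("minikube", "config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return "", ee.ExitCode()
	}
	return string(out), 0
}

func main() {
	_ = exec.Command("minikube", "config", "set", "cpus", "2").Run()
	if v, _ := get("cpus"); v != "" {
		fmt.Print("cpus = ", v)
	}
	_ = exec.Command("minikube", "config", "unset", "cpus").Run()
	if _, code := get("cpus"); code != 0 {
		fmt.Println("get on an unset key exits with status", code) // 14 in the run above
	}
}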

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-691821 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-691821 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (145.618288ms)

-- stdout --
	* [functional-691821] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0510 17:58:53.107428 1182261 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:58:53.107738 1182261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:53.107749 1182261 out.go:358] Setting ErrFile to fd 2...
	I0510 17:58:53.107753 1182261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:53.107955 1182261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 17:58:53.108575 1182261 out.go:352] Setting JSON to false
	I0510 17:58:53.109725 1182261 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":20477,"bootTime":1746879456,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:58:53.109836 1182261 start.go:140] virtualization: kvm guest
	I0510 17:58:53.111629 1182261 out.go:177] * [functional-691821] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:58:53.112993 1182261 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:58:53.113023 1182261 notify.go:220] Checking for updates...
	I0510 17:58:53.115033 1182261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:58:53.116164 1182261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:58:53.117270 1182261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:58:53.118474 1182261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:58:53.119823 1182261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:58:53.121471 1182261 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:58:53.121867 1182261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:53.121943 1182261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:53.138997 1182261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34607
	I0510 17:58:53.139512 1182261 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:53.140118 1182261 main.go:141] libmachine: Using API Version  1
	I0510 17:58:53.140140 1182261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:53.140660 1182261 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:53.140864 1182261 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:53.141185 1182261 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:58:53.141724 1182261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:53.141784 1182261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:53.159095 1182261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0510 17:58:53.159642 1182261 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:53.160280 1182261 main.go:141] libmachine: Using API Version  1
	I0510 17:58:53.160304 1182261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:53.160669 1182261 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:53.160874 1182261 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:53.195064 1182261 out.go:177] * Using the kvm2 driver based on existing profile
	I0510 17:58:53.196226 1182261 start.go:304] selected driver: kvm2
	I0510 17:58:53.196253 1182261 start.go:908] validating driver "kvm2" against &{Name:functional-691821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-691821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:58:53.196397 1182261 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:58:53.198581 1182261 out.go:201] 
	W0510 17:58:53.199772 1182261 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0510 17:58:53.200971 1182261 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-691821 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)
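
--dry-run validates flags against the existing profile without touching the cluster; the 250MB request above is deliberately below the usable minimum and fails fast with exit status 23. A sketch of catching that from Go, profile name illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// An undersized --memory request should be rejected during validation.
	err := exec.Command("minikube", "start", "-p", "demo",
		"--dry-run", "--memory", "250MB").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("dry-run rejected the config, exit code:", ee.ExitCode()) // 23 above
	} else if err == nil {
		fmt.Println("dry-run passed validation")
	}
}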

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-691821 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-691821 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (139.405688ms)

-- stdout --
	* [functional-691821] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0510 17:58:49.682431 1181806 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:58:49.682668 1181806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:49.682676 1181806 out.go:358] Setting ErrFile to fd 2...
	I0510 17:58:49.682680 1181806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:58:49.682989 1181806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 17:58:49.683521 1181806 out.go:352] Setting JSON to false
	I0510 17:58:49.684499 1181806 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":20474,"bootTime":1746879456,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:58:49.684607 1181806 start.go:140] virtualization: kvm guest
	I0510 17:58:49.686556 1181806 out.go:177] * [functional-691821] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0510 17:58:49.687664 1181806 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:58:49.687714 1181806 notify.go:220] Checking for updates...
	I0510 17:58:49.690162 1181806 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:58:49.691309 1181806 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 17:58:49.692646 1181806 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 17:58:49.693745 1181806 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:58:49.694956 1181806 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:58:49.696808 1181806 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 17:58:49.697390 1181806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:49.697481 1181806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:49.712955 1181806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46697
	I0510 17:58:49.713345 1181806 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:49.713920 1181806 main.go:141] libmachine: Using API Version  1
	I0510 17:58:49.713952 1181806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:49.714311 1181806 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:49.714458 1181806 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:49.714714 1181806 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:58:49.715009 1181806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 17:58:49.715069 1181806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 17:58:49.731469 1181806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I0510 17:58:49.731990 1181806 main.go:141] libmachine: () Calling .GetVersion
	I0510 17:58:49.732507 1181806 main.go:141] libmachine: Using API Version  1
	I0510 17:58:49.732529 1181806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 17:58:49.732888 1181806 main.go:141] libmachine: () Calling .GetMachineName
	I0510 17:58:49.733060 1181806 main.go:141] libmachine: (functional-691821) Calling .DriverName
	I0510 17:58:49.764828 1181806 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0510 17:58:49.766031 1181806 start.go:304] selected driver: kvm2
	I0510 17:58:49.766046 1181806 start.go:908] validating driver "kvm2" against &{Name:functional-691821 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20720/minikube-v1.35.0-1746739450-20720-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-691821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:58:49.766177 1181806 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:58:49.768222 1181806 out.go:201] 
	W0510 17:58:49.769366 1181806 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0510 17:58:49.770540 1181806 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.76s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.76s)
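
`status -f` renders a Go template over the status struct; the fields exercised above are Host, Kubelet, APIServer, and Kubeconfig (the `kublet:` label in the logged command is just literal template text, not a field name). A sketch with an illustrative profile:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Anything outside {{...}} is literal output; the struct fields do the work.
	out, err := exec.Command("minikube", "-p", "demo", "status",
		"-f", "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").Output()
	if err != nil {
		log.Fatal(err) // status also exits non-zero when components are stopped
	}
	fmt.Println(string(out))
}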

TestFunctional/parallel/ServiceCmdConnect (11.47s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-691821 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-691821 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-s9rfl" [75c1bae4-ff9f-4867-b06a-b8310163a417] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-s9rfl" [75c1bae4-ff9f-4867-b06a-b8310163a417] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003692501s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.96:31390
functional_test.go:1692: http://192.168.39.96:31390: success! body:

Hostname: hello-node-connect-58f9cf68d8-s9rfl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.96:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.96:31390
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.47s)
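
The connect flow above is: create a deployment, expose it as a NodePort, resolve the URL with `service --url`, then issue a plain HTTP GET against it. A sketch under the same assumptions (echoserver image as above; deployment and profile names illustrative):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	run := func(name string, args ...string) string {
		out, err := exec.Command(name, args...).Output()
		if err != nil {
			log.Fatalf("%s %v: %v", name, args, err)
		}
		return strings.TrimSpace(string(out))
	}
	run("kubectl", "create", "deployment", "hello", "--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "expose", "deployment", "hello", "--type=NodePort", "--port=8080")
	// A real caller would wait for the pod to become Ready here, as the test does.
	url := run("minikube", "-p", "demo", "service", "hello", "--url")
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // echoserver reflects the request back, as seen above
}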

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh -n functional-691821 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 cp functional-691821:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2429244187/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh -n functional-691821 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh -n functional-691821 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)
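
`minikube cp` copies both host-to-node and node-to-host, with the node-side path prefixed by the profile/node name, and the test verifies each copy with `ssh sudo cat`. A minimal sketch of that round trip; paths and profile name are illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs a minikube subcommand against the illustrative profile
// and returns its combined output.
func mk(args ...string) string {
	out, err := exec.Command("minikube", append([]string{"-p", "demo"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// host -> node
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(mk("ssh", "sudo cat /home/docker/cp-test.txt")) // verify inside the node
	// node -> host (source is prefixed with the node/profile name)
	mk("cp", "demo:/home/docker/cp-test.txt", "/tmp/cp-test.txt")
}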

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1172304/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo cat /etc/test/nested/copy/1172304/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.34s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1172304.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo cat /etc/ssl/certs/1172304.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1172304.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo cat /usr/share/ca-certificates/1172304.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/11723042.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo cat /etc/ssl/certs/11723042.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/11723042.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo cat /usr/share/ca-certificates/11723042.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.34s)
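
The sync check above reads each certificate back from three in-VM locations: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and an OpenSSL subject-hash style <hash>.0 file. A sketch of the same presence check; the paths below are copied from this run and the profile name is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, path := range []string{
		"/etc/ssl/certs/1172304.pem",
		"/usr/share/ca-certificates/1172304.pem",
		"/etc/ssl/certs/51391683.0",
	} {
		// sudo cat exits non-zero if the synced file is missing.
		err := exec.Command("minikube", "-p", "demo", "ssh", "sudo cat "+path).Run()
		fmt.Printf("%s present: %v\n", path, err == nil)
	}
}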

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-691821 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 ssh "sudo systemctl is-active docker": exit status 1 (227.97702ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 ssh "sudo systemctl is-active crio": exit status 1 (211.22576ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
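
With containerd selected, docker and crio should both be inactive. `systemctl is-active` prints the unit state and exits non-zero for inactive units (status 3 in the stderr above), which `minikube ssh` surfaces as a failed command. A sketch, profile name illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		// CombinedOutput captures "inactive" even though the command fails.
		out, err := exec.Command("minikube", "-p", "demo", "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		fmt.Printf("%s: %s (err: %v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}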

TestFunctional/parallel/License (0.56s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-691821 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.33.0
registry.k8s.io/kube-proxy:v1.33.0
registry.k8s.io/kube-controller-manager:v1.33.0
registry.k8s.io/kube-apiserver:v1.33.0
registry.k8s.io/etcd:3.5.21-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.12.0
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-691821
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kicbase/echo-server:functional-691821
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-691821 image ls --format short --alsologtostderr:
I0510 17:59:01.318135 1182936 out.go:345] Setting OutFile to fd 1 ...
I0510 17:59:01.318412 1182936 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:01.318422 1182936 out.go:358] Setting ErrFile to fd 2...
I0510 17:59:01.318426 1182936 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:01.318606 1182936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
I0510 17:59:01.319206 1182936 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:01.319304 1182936 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:01.319642 1182936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:01.319723 1182936 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:01.335488 1182936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39843
I0510 17:59:01.336039 1182936 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:01.336664 1182936 main.go:141] libmachine: Using API Version  1
I0510 17:59:01.336688 1182936 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:01.337046 1182936 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:01.337245 1182936 main.go:141] libmachine: (functional-691821) Calling .GetState
I0510 17:59:01.339190 1182936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:01.339245 1182936 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:01.354969 1182936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
I0510 17:59:01.355491 1182936 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:01.356017 1182936 main.go:141] libmachine: Using API Version  1
I0510 17:59:01.356053 1182936 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:01.356462 1182936 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:01.356661 1182936 main.go:141] libmachine: (functional-691821) Calling .DriverName
I0510 17:59:01.356961 1182936 ssh_runner.go:195] Run: systemctl --version
I0510 17:59:01.356988 1182936 main.go:141] libmachine: (functional-691821) Calling .GetSSHHostname
I0510 17:59:01.359766 1182936 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:01.360180 1182936 main.go:141] libmachine: (functional-691821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:2e:2b", ip: ""} in network mk-functional-691821: {Iface:virbr1 ExpiryTime:2025-05-10 18:55:31 +0000 UTC Type:0 Mac:52:54:00:9f:2e:2b Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-691821 Clientid:01:52:54:00:9f:2e:2b}
I0510 17:59:01.360204 1182936 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined IP address 192.168.39.96 and MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:01.360403 1182936 main.go:141] libmachine: (functional-691821) Calling .GetSSHPort
I0510 17:59:01.360597 1182936 main.go:141] libmachine: (functional-691821) Calling .GetSSHKeyPath
I0510 17:59:01.360739 1182936 main.go:141] libmachine: (functional-691821) Calling .GetSSHUsername
I0510 17:59:01.360866 1182936 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/functional-691821/id_rsa Username:docker}
I0510 17:59:01.443073 1182936 ssh_runner.go:195] Run: sudo crictl images --output json
I0510 17:59:01.480043 1182936 main.go:141] libmachine: Making call to close driver server
I0510 17:59:01.480069 1182936 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:01.480451 1182936 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:01.480479 1182936 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 17:59:01.480488 1182936 main.go:141] libmachine: Making call to close driver server
I0510 17:59:01.480485 1182936 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
I0510 17:59:01.480494 1182936 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:01.480759 1182936 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:01.480776 1182936 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 17:59:01.480808 1182936 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
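
`image ls` supports the short, table, and json formats exercised in this report; json is the easiest to consume programmatically. A sketch of parsing it, with struct fields matching the JSON shown in the ImageListJson section below (profile name illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields of `image ls --format json` seen later in this report.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "demo",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}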

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-691821 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-691821  | sha256:9056ab | 2.37MB |
| docker.io/library/minikube-local-cache-test | functional-691821  | sha256:8200a5 | 992B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-controller-manager     | v1.33.0            | sha256:1d579c | 27.6MB |
| registry.k8s.io/kube-scheduler              | v1.33.0            | sha256:8d7258 | 21.8MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/coredns/coredns             | v1.12.0            | sha256:1cf5f1 | 20.9MB |
| registry.k8s.io/etcd                        | 3.5.21-0           | sha256:499038 | 58.9MB |
| registry.k8s.io/kube-apiserver              | v1.33.0            | sha256:6ba954 | 30.1MB |
| localhost/my-image                          | functional-691821  | sha256:67c2ba | 775kB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-proxy                  | v1.33.0            | sha256:f1184a | 31.9MB |
| docker.io/kindest/kindnetd                  | v20250214-acbabc1a | sha256:df3849 | 39MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-691821 image ls --format table --alsologtostderr:
I0510 17:59:06.037142 1183102 out.go:345] Setting OutFile to fd 1 ...
I0510 17:59:06.037426 1183102 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:06.037437 1183102 out.go:358] Setting ErrFile to fd 2...
I0510 17:59:06.037441 1183102 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:06.037618 1183102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
I0510 17:59:06.038155 1183102 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:06.038249 1183102 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:06.038585 1183102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:06.038640 1183102 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:06.054095 1183102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
I0510 17:59:06.054549 1183102 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:06.055067 1183102 main.go:141] libmachine: Using API Version  1
I0510 17:59:06.055089 1183102 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:06.055492 1183102 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:06.055718 1183102 main.go:141] libmachine: (functional-691821) Calling .GetState
I0510 17:59:06.057586 1183102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:06.057630 1183102 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:06.073127 1183102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
I0510 17:59:06.073627 1183102 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:06.074087 1183102 main.go:141] libmachine: Using API Version  1
I0510 17:59:06.074107 1183102 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:06.074477 1183102 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:06.074673 1183102 main.go:141] libmachine: (functional-691821) Calling .DriverName
I0510 17:59:06.074900 1183102 ssh_runner.go:195] Run: systemctl --version
I0510 17:59:06.074926 1183102 main.go:141] libmachine: (functional-691821) Calling .GetSSHHostname
I0510 17:59:06.077637 1183102 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:06.078080 1183102 main.go:141] libmachine: (functional-691821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:2e:2b", ip: ""} in network mk-functional-691821: {Iface:virbr1 ExpiryTime:2025-05-10 18:55:31 +0000 UTC Type:0 Mac:52:54:00:9f:2e:2b Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-691821 Clientid:01:52:54:00:9f:2e:2b}
I0510 17:59:06.078107 1183102 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined IP address 192.168.39.96 and MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:06.078252 1183102 main.go:141] libmachine: (functional-691821) Calling .GetSSHPort
I0510 17:59:06.078431 1183102 main.go:141] libmachine: (functional-691821) Calling .GetSSHKeyPath
I0510 17:59:06.078699 1183102 main.go:141] libmachine: (functional-691821) Calling .GetSSHUsername
I0510 17:59:06.078837 1183102 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/functional-691821/id_rsa Username:docker}
I0510 17:59:06.163303 1183102 ssh_runner.go:195] Run: sudo crictl images --output json
I0510 17:59:06.205128 1183102 main.go:141] libmachine: Making call to close driver server
I0510 17:59:06.205146 1183102 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:06.205453 1183102 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:06.205475 1183102 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 17:59:06.205474 1183102 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
I0510 17:59:06.205488 1183102 main.go:141] libmachine: Making call to close driver server
I0510 17:59:06.205496 1183102 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:06.205821 1183102 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:06.205849 1183102 main.go:141] libmachine: Making call to close connection to plugin binary
E0510 17:59:29.004586 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:00:50.926423 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-691821 image ls --format json --alsologtostderr:
[{"id":"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1","repoDigests":["registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121"],"repoTags":["registry.k8s.io/etcd:3.5.21-0"],"size":"58938593"},{"id":"sha256:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32"],"repoTags":["registry.k8s.io/kube-apiserver:v1.33.0"],"size":"30071307"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:8200a51f948add13b6043377ccb432f2878d62a976b193d9793cde32d272d672",
"repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-691821"],"size":"992"},{"id":"sha256:67c2bac149325ea16357da64538e096ba655312faf5b71009c978a48ff68f539","repoDigests":[],"repoTags":["localhost/my-image:functional-691821"],"size":"774888"},{"id":"sha256:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.33.0"],"size":"27635030"},{"id":"sha256:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68","repoDigests":["registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b"],"repoTags":["registry.k8s.io/kube-proxy:v1.33.0"],"size":"31887726"},{"id":"sha256:df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d4
95"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"38996835"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f"],"repoTags":["registry.k8s.io/kube-scheduler:v1.33.0"],"size":"21776484"},{"id":"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b","repoDigests":["registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.0"],"size":"20939036"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoT
ags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-691821"],"size":"2372971"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-691821 image ls --format json --alsologtostderr:
I0510 17:59:05.816818 1183078 out.go:345] Setting OutFile to fd 1 ...
I0510 17:59:05.816920 1183078 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:05.816927 1183078 out.go:358] Setting ErrFile to fd 2...
I0510 17:59:05.816931 1183078 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:05.817140 1183078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
I0510 17:59:05.817748 1183078 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:05.817936 1183078 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:05.818405 1183078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:05.818487 1183078 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:05.835182 1183078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42605
I0510 17:59:05.835937 1183078 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:05.836615 1183078 main.go:141] libmachine: Using API Version  1
I0510 17:59:05.836650 1183078 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:05.837031 1183078 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:05.837243 1183078 main.go:141] libmachine: (functional-691821) Calling .GetState
I0510 17:59:05.839809 1183078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:05.839864 1183078 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:05.855970 1183078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
I0510 17:59:05.856564 1183078 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:05.857103 1183078 main.go:141] libmachine: Using API Version  1
I0510 17:59:05.857128 1183078 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:05.857531 1183078 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:05.857746 1183078 main.go:141] libmachine: (functional-691821) Calling .DriverName
I0510 17:59:05.858021 1183078 ssh_runner.go:195] Run: systemctl --version
I0510 17:59:05.858052 1183078 main.go:141] libmachine: (functional-691821) Calling .GetSSHHostname
I0510 17:59:05.861112 1183078 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:05.861591 1183078 main.go:141] libmachine: (functional-691821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:2e:2b", ip: ""} in network mk-functional-691821: {Iface:virbr1 ExpiryTime:2025-05-10 18:55:31 +0000 UTC Type:0 Mac:52:54:00:9f:2e:2b Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-691821 Clientid:01:52:54:00:9f:2e:2b}
I0510 17:59:05.861621 1183078 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined IP address 192.168.39.96 and MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:05.861807 1183078 main.go:141] libmachine: (functional-691821) Calling .GetSSHPort
I0510 17:59:05.861989 1183078 main.go:141] libmachine: (functional-691821) Calling .GetSSHKeyPath
I0510 17:59:05.862131 1183078 main.go:141] libmachine: (functional-691821) Calling .GetSSHUsername
I0510 17:59:05.862274 1183078 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/functional-691821/id_rsa Username:docker}
I0510 17:59:05.946700 1183078 ssh_runner.go:195] Run: sudo crictl images --output json
I0510 17:59:05.984437 1183078 main.go:141] libmachine: Making call to close driver server
I0510 17:59:05.984451 1183078 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:05.984747 1183078 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:05.984766 1183078 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
I0510 17:59:05.984777 1183078 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 17:59:05.984795 1183078 main.go:141] libmachine: Making call to close driver server
I0510 17:59:05.984805 1183078 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:05.985046 1183078 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:05.985075 1183078 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 17:59:05.985101 1183078 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
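
The JSON listing above is a single array of image objects, each carrying id, repoDigests, repoTags, and size, so it composes directly with standard JSON tooling. A minimal sketch, assuming jq is available on the host (the test itself never invokes jq):

    # print every tag known to the cluster's container runtime
    out/minikube-linux-amd64 -p functional-691821 image ls --format json | jq -r '.[].repoTags[]'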

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-691821 image ls --format yaml --alsologtostderr:
- id: sha256:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a
repoTags:
- registry.k8s.io/kube-controller-manager:v1.33.0
size: "27635030"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:8200a51f948add13b6043377ccb432f2878d62a976b193d9793cde32d272d672
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-691821
size: "992"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32
repoTags:
- registry.k8s.io/kube-apiserver:v1.33.0
size: "30071307"
- id: sha256:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f
repoTags:
- registry.k8s.io/kube-scheduler:v1.33.0
size: "21776484"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-691821
size: "2372971"
- id: sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.0
size: "20939036"
- id: sha256:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68
repoDigests:
- registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b
repoTags:
- registry.k8s.io/kube-proxy:v1.33.0
size: "31887726"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "38996835"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1
repoDigests:
- registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121
repoTags:
- registry.k8s.io/etcd:3.5.21-0
size: "58938593"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-691821 image ls --format yaml --alsologtostderr:
I0510 17:59:01.533732 1182960 out.go:345] Setting OutFile to fd 1 ...
I0510 17:59:01.534023 1182960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:01.534035 1182960 out.go:358] Setting ErrFile to fd 2...
I0510 17:59:01.534038 1182960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:01.534214 1182960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
I0510 17:59:01.534814 1182960 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:01.534917 1182960 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:01.535306 1182960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:01.535377 1182960 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:01.551382 1182960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45047
I0510 17:59:01.551869 1182960 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:01.552543 1182960 main.go:141] libmachine: Using API Version  1
I0510 17:59:01.552574 1182960 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:01.552954 1182960 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:01.553102 1182960 main.go:141] libmachine: (functional-691821) Calling .GetState
I0510 17:59:01.554882 1182960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:01.554929 1182960 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:01.570761 1182960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
I0510 17:59:01.571292 1182960 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:01.571791 1182960 main.go:141] libmachine: Using API Version  1
I0510 17:59:01.571814 1182960 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:01.572257 1182960 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:01.572475 1182960 main.go:141] libmachine: (functional-691821) Calling .DriverName
I0510 17:59:01.572677 1182960 ssh_runner.go:195] Run: systemctl --version
I0510 17:59:01.572703 1182960 main.go:141] libmachine: (functional-691821) Calling .GetSSHHostname
I0510 17:59:01.575662 1182960 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:01.576073 1182960 main.go:141] libmachine: (functional-691821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:2e:2b", ip: ""} in network mk-functional-691821: {Iface:virbr1 ExpiryTime:2025-05-10 18:55:31 +0000 UTC Type:0 Mac:52:54:00:9f:2e:2b Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-691821 Clientid:01:52:54:00:9f:2e:2b}
I0510 17:59:01.576102 1182960 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined IP address 192.168.39.96 and MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:01.576259 1182960 main.go:141] libmachine: (functional-691821) Calling .GetSSHPort
I0510 17:59:01.576420 1182960 main.go:141] libmachine: (functional-691821) Calling .GetSSHKeyPath
I0510 17:59:01.576580 1182960 main.go:141] libmachine: (functional-691821) Calling .GetSSHUsername
I0510 17:59:01.576769 1182960 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/functional-691821/id_rsa Username:docker}
I0510 17:59:01.658312 1182960 ssh_runner.go:195] Run: sudo crictl images --output json
I0510 17:59:01.694274 1182960 main.go:141] libmachine: Making call to close driver server
I0510 17:59:01.694293 1182960 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:01.694614 1182960 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:01.694637 1182960 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 17:59:01.694663 1182960 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
I0510 17:59:01.694731 1182960 main.go:141] libmachine: Making call to close driver server
I0510 17:59:01.694744 1182960 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:01.695011 1182960 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
I0510 17:59:01.695051 1182960 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:01.695061 1182960 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 ssh pgrep buildkitd: exit status 1 (205.235934ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image build -t localhost/my-image:functional-691821 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 image build -t localhost/my-image:functional-691821 testdata/build --alsologtostderr: (3.636831571s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-691821 image build -t localhost/my-image:functional-691821 testdata/build --alsologtostderr:
I0510 17:59:01.951307 1183030 out.go:345] Setting OutFile to fd 1 ...
I0510 17:59:01.951590 1183030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:01.951601 1183030 out.go:358] Setting ErrFile to fd 2...
I0510 17:59:01.951605 1183030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:59:01.951792 1183030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
I0510 17:59:01.952388 1183030 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:01.952992 1183030 config.go:182] Loaded profile config "functional-691821": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
I0510 17:59:01.953352 1183030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:01.953412 1183030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:01.971034 1183030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
I0510 17:59:01.971645 1183030 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:01.972265 1183030 main.go:141] libmachine: Using API Version  1
I0510 17:59:01.972299 1183030 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:01.972711 1183030 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:01.972901 1183030 main.go:141] libmachine: (functional-691821) Calling .GetState
I0510 17:59:01.974878 1183030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0510 17:59:01.974928 1183030 main.go:141] libmachine: Launching plugin server for driver kvm2
I0510 17:59:01.991467 1183030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38561
I0510 17:59:01.992144 1183030 main.go:141] libmachine: () Calling .GetVersion
I0510 17:59:01.992781 1183030 main.go:141] libmachine: Using API Version  1
I0510 17:59:01.992809 1183030 main.go:141] libmachine: () Calling .SetConfigRaw
I0510 17:59:01.993245 1183030 main.go:141] libmachine: () Calling .GetMachineName
I0510 17:59:01.993476 1183030 main.go:141] libmachine: (functional-691821) Calling .DriverName
I0510 17:59:01.993786 1183030 ssh_runner.go:195] Run: systemctl --version
I0510 17:59:01.993815 1183030 main.go:141] libmachine: (functional-691821) Calling .GetSSHHostname
I0510 17:59:01.996928 1183030 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:01.997423 1183030 main.go:141] libmachine: (functional-691821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:2e:2b", ip: ""} in network mk-functional-691821: {Iface:virbr1 ExpiryTime:2025-05-10 18:55:31 +0000 UTC Type:0 Mac:52:54:00:9f:2e:2b Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-691821 Clientid:01:52:54:00:9f:2e:2b}
I0510 17:59:01.997456 1183030 main.go:141] libmachine: (functional-691821) DBG | domain functional-691821 has defined IP address 192.168.39.96 and MAC address 52:54:00:9f:2e:2b in network mk-functional-691821
I0510 17:59:01.997641 1183030 main.go:141] libmachine: (functional-691821) Calling .GetSSHPort
I0510 17:59:01.997829 1183030 main.go:141] libmachine: (functional-691821) Calling .GetSSHKeyPath
I0510 17:59:01.998017 1183030 main.go:141] libmachine: (functional-691821) Calling .GetSSHUsername
I0510 17:59:01.998186 1183030 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/functional-691821/id_rsa Username:docker}
I0510 17:59:02.086192 1183030 build_images.go:161] Building image from path: /tmp/build.2899807203.tar
I0510 17:59:02.086265 1183030 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0510 17:59:02.097353 1183030 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2899807203.tar
I0510 17:59:02.102624 1183030 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2899807203.tar: stat -c "%s %y" /var/lib/minikube/build/build.2899807203.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2899807203.tar': No such file or directory
I0510 17:59:02.102656 1183030 ssh_runner.go:362] scp /tmp/build.2899807203.tar --> /var/lib/minikube/build/build.2899807203.tar (3072 bytes)
I0510 17:59:02.132595 1183030 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2899807203
I0510 17:59:02.143059 1183030 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2899807203 -xf /var/lib/minikube/build/build.2899807203.tar
I0510 17:59:02.153006 1183030 containerd.go:394] Building image: /var/lib/minikube/build/build.2899807203
I0510 17:59:02.153101 1183030 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2899807203 --local dockerfile=/var/lib/minikube/build/build.2899807203 --output type=image,name=localhost/my-image:functional-691821
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:70e094eb845a5efcb348766e883bd25dc55411ef56237d3737894ac6ba63ec03
#8 exporting manifest sha256:70e094eb845a5efcb348766e883bd25dc55411ef56237d3737894ac6ba63ec03 0.0s done
#8 exporting config sha256:67c2bac149325ea16357da64538e096ba655312faf5b71009c978a48ff68f539 0.0s done
#8 naming to localhost/my-image:functional-691821 done
#8 DONE 0.2s
I0510 17:59:05.504579 1183030 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2899807203 --local dockerfile=/var/lib/minikube/build/build.2899807203 --output type=image,name=localhost/my-image:functional-691821: (3.351419501s)
I0510 17:59:05.504663 1183030 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2899807203
I0510 17:59:05.520232 1183030 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2899807203.tar
I0510 17:59:05.536186 1183030 build_images.go:217] Built localhost/my-image:functional-691821 from /tmp/build.2899807203.tar
I0510 17:59:05.536226 1183030 build_images.go:133] succeeded building to: functional-691821
I0510 17:59:05.536231 1183030 build_images.go:134] failed building to: 
I0510 17:59:05.536256 1183030 main.go:141] libmachine: Making call to close driver server
I0510 17:59:05.536285 1183030 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:05.536621 1183030 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:05.536641 1183030 main.go:141] libmachine: Making call to close connection to plugin binary
I0510 17:59:05.536653 1183030 main.go:141] libmachine: Making call to close driver server
I0510 17:59:05.536660 1183030 main.go:141] libmachine: (functional-691821) Calling .Close
I0510 17:59:05.536662 1183030 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
I0510 17:59:05.536906 1183030 main.go:141] libmachine: Successfully made call to close driver server
I0510 17:59:05.536923 1183030 main.go:141] libmachine: (functional-691821) DBG | Closing plugin on server side
I0510 17:59:05.536930 1183030 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.07s)
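
The build log above pins down the Dockerfile under testdata/build: FROM gcr.io/k8s-minikube/busybox:latest (step #5), RUN true (step #6), and ADD content.txt / (step #7). A sketch of an equivalent build context, reconstructed from those logged steps rather than copied from the repo; the content.txt payload is a placeholder:

    # assemble a look-alike build context (file contents assumed, not taken from testdata)
    mkdir -p build-sketch
    printf 'placeholder\n' > build-sketch/content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > build-sketch/Dockerfile
    out/minikube-linux-amd64 -p functional-691821 image build -t localhost/my-image:functional-691821 build-sketch --alsologtostderr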

TestFunctional/parallel/ImageCommands/Setup (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.676558509s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-691821
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image load --daemon kicbase/echo-server:functional-691821 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 image load --daemon kicbase/echo-server:functional-691821 --alsologtostderr: (1.153180483s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "290.833909ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "50.920149ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "293.266309ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "53.722039ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image load --daemon kicbase/echo-server:functional-691821 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 image load --daemon kicbase/echo-server:functional-691821 --alsologtostderr: (1.072625516s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-691821
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image load --daemon kicbase/echo-server:functional-691821 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p functional-691821 image load --daemon kicbase/echo-server:functional-691821 --alsologtostderr: (1.133255343s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image save kicbase/echo-server:functional-691821 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image rm kicbase/echo-server:functional-691821 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-691821
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 image save --daemon kicbase/echo-server:functional-691821 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-691821
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
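
Read together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise the full image round trip. Condensed from the commands logged above:

    out/minikube-linux-amd64 -p functional-691821 image save kicbase/echo-server:functional-691821 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-691821 image rm kicbase/echo-server:functional-691821 --alsologtostderr
    out/minikube-linux-amd64 -p functional-691821 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-691821 image save --daemon kicbase/echo-server:functional-691821 --alsologtostderr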

TestFunctional/parallel/ServiceCmd/DeployApp (7.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-691821 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-691821 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-dmfws" [28a6f3ac-1535-4962-bd26-6b2e8fc70e01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-dmfws" [28a6f3ac-1535-4962-bd26-6b2e8fc70e01] Running
E0510 17:58:48.042602 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004257394s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.16s)

TestFunctional/parallel/MountCmd/any-port (7.23s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdany-port447315085/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1746899929777236071" to /tmp/TestFunctionalparallelMountCmdany-port447315085/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1746899929777236071" to /tmp/TestFunctionalparallelMountCmdany-port447315085/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1746899929777236071" to /tmp/TestFunctionalparallelMountCmdany-port447315085/001/test-1746899929777236071
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (206.429697ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0510 17:58:49.983994 1172304 retry.go:31] will retry after 283.161463ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 10 17:58 created-by-test
-rw-r--r-- 1 docker docker 24 May 10 17:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 10 17:58 test-1746899929777236071
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh cat /mount-9p/test-1746899929777236071
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-691821 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c8cc60d6-b579-40c3-998c-14cd4f5cc413] Pending
helpers_test.go:344: "busybox-mount" [c8cc60d6-b579-40c3-998c-14cd4f5cc413] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c8cc60d6-b579-40c3-998c-14cd4f5cc413] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c8cc60d6-b579-40c3-998c-14cd4f5cc413] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005037318s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-691821 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdany-port447315085/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.23s)
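
To reproduce the 9p mount outside the harness, the same pair of commands works from a shell; /tmp/some-host-dir is an arbitrary host path standing in for the temp dir the test generates, and the mount process is backgrounded because it stays alive for the life of the mount:

    out/minikube-linux-amd64 mount -p functional-691821 /tmp/some-host-dir:/mount-9p &
    out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T /mount-9p | grep 9p"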

TestFunctional/parallel/ServiceCmd/List (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 service list -o json
functional_test.go:1511: Took "441.096714ms" to run "out/minikube-linux-amd64 -p functional-691821 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.96:31335
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

TestFunctional/parallel/ServiceCmd/Format (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

TestFunctional/parallel/ServiceCmd/URL (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.96:31335
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
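
The --https --url and --url forms above resolve to the same NodePort, 31335. A quick manual probe of the reported endpoint, assuming curl on the host (the test only verifies URL discovery, not the HTTP response):

    curl http://192.168.39.96:31335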

TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdspecific-port4113061102/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.207073ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0510 17:58:57.213627 1172304 retry.go:31] will retry after 669.186434ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdspecific-port4113061102/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 ssh "sudo umount -f /mount-9p": exit status 1 (198.202491ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-691821 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdspecific-port4113061102/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T" /mount1: exit status 1 (244.634628ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0510 17:58:59.143456 1172304 retry.go:31] will retry after 662.003817ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-691821 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-691821 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1727294169/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)
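VerifyCleanup exercises mount --kill=true, which terminates every mount daemon belonging to the profile in one call instead of stopping each one individually; the "unable to find parent, assuming dead" lines confirm the three daemons were already gone when the test tried to stop them. A minimal sketch, with /tmp/src as a placeholder host directory:

  $ out/minikube-linux-amd64 mount -p functional-691821 /tmp/src:/mount1 &
  $ out/minikube-linux-amd64 mount -p functional-691821 /tmp/src:/mount2 &
  $ out/minikube-linux-amd64 mount -p functional-691821 /tmp/src:/mount3 &
  $ out/minikube-linux-amd64 -p functional-691821 ssh "findmnt -T /mount1"
  $ out/minikube-linux-amd64 mount -p functional-691821 --kill=true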

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-691821
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-691821
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-691821
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (207.5s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (3m26.79232525s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.50s)
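The --ha flag provisions a multi-control-plane cluster; the status output later in this report shows three control planes (ha-112386 through ha-112386-m03) plus the worker added in AddWorkerNode. The invocation is reproducible as-is:

  $ out/minikube-linux-amd64 -p ha-112386 start --ha --memory 2200 --wait true \
      --driver=kvm2 --container-runtime=containerd
  $ out/minikube-linux-amd64 -p ha-112386 status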

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.49s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 kubectl -- rollout status deployment/busybox: (4.357548119s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-5dmkz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-gh86s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-zh7tv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-5dmkz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-gh86s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-zh7tv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-5dmkz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-gh86s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-zh7tv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.49s)
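The DNS checks follow a fixed pattern: apply the manifest, wait for the rollout, collect pod names with a jsonpath query, then resolve three names of increasing specificity from every pod. The same queries work with kubectl directly; <pod> below is a placeholder for any name returned by the jsonpath step:

  $ kubectl --context ha-112386 rollout status deployment/busybox
  $ kubectl --context ha-112386 get pods -o jsonpath='{.items[*].metadata.name}'
  $ kubectl --context ha-112386 exec <pod> -- nslookup kubernetes.default.svc.cluster.local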

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-5dmkz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-5dmkz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-gh86s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-gh86s -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-zh7tv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 kubectl -- exec busybox-58667487b6-zh7tv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
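The pipeline in this test reads the host's address out of busybox nslookup output: awk 'NR==5' keeps the fifth line, which carries the answer record for host.minikube.internal, and cut -d' ' -f3 takes its third space-separated field, the IP itself (192.168.39.1, the KVM network gateway, on this run). A sketch of the same check in two steps, with <pod> as a placeholder:

  $ IP=$(kubectl --context ha-112386 exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  $ kubectl --context ha-112386 exec <pod> -- ping -c 1 "$IP"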

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (51.99s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 node add --alsologtostderr -v 5: (51.04367392s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.99s)
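node add without flags joins the new machine as a worker (m04 here); the --control-plane variant used in AddSecondaryNode below joins it as another control plane. Reproducible as:

  $ out/minikube-linux-amd64 -p ha-112386 node add
  $ out/minikube-linux-amd64 -p ha-112386 status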

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-112386 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.32s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 status --output json --alsologtostderr -v 5
E0510 18:13:07.055709 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp testdata/cp-test.txt ha-112386:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1600617922/001/cp-test_ha-112386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386:/home/docker/cp-test.txt ha-112386-m02:/home/docker/cp-test_ha-112386_ha-112386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m02 "sudo cat /home/docker/cp-test_ha-112386_ha-112386-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386:/home/docker/cp-test.txt ha-112386-m03:/home/docker/cp-test_ha-112386_ha-112386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m03 "sudo cat /home/docker/cp-test_ha-112386_ha-112386-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386:/home/docker/cp-test.txt ha-112386-m04:/home/docker/cp-test_ha-112386_ha-112386-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m04 "sudo cat /home/docker/cp-test_ha-112386_ha-112386-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp testdata/cp-test.txt ha-112386-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1600617922/001/cp-test_ha-112386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m02:/home/docker/cp-test.txt ha-112386:/home/docker/cp-test_ha-112386-m02_ha-112386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386 "sudo cat /home/docker/cp-test_ha-112386-m02_ha-112386.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m02:/home/docker/cp-test.txt ha-112386-m03:/home/docker/cp-test_ha-112386-m02_ha-112386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m03 "sudo cat /home/docker/cp-test_ha-112386-m02_ha-112386-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m02:/home/docker/cp-test.txt ha-112386-m04:/home/docker/cp-test_ha-112386-m02_ha-112386-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m04 "sudo cat /home/docker/cp-test_ha-112386-m02_ha-112386-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp testdata/cp-test.txt ha-112386-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1600617922/001/cp-test_ha-112386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m03:/home/docker/cp-test.txt ha-112386:/home/docker/cp-test_ha-112386-m03_ha-112386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386 "sudo cat /home/docker/cp-test_ha-112386-m03_ha-112386.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m03:/home/docker/cp-test.txt ha-112386-m02:/home/docker/cp-test_ha-112386-m03_ha-112386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m02 "sudo cat /home/docker/cp-test_ha-112386-m03_ha-112386-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m03:/home/docker/cp-test.txt ha-112386-m04:/home/docker/cp-test_ha-112386-m03_ha-112386-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m04 "sudo cat /home/docker/cp-test_ha-112386-m03_ha-112386-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp testdata/cp-test.txt ha-112386-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1600617922/001/cp-test_ha-112386-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m04:/home/docker/cp-test.txt ha-112386:/home/docker/cp-test_ha-112386-m04_ha-112386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386 "sudo cat /home/docker/cp-test_ha-112386-m04_ha-112386.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m04:/home/docker/cp-test.txt ha-112386-m02:/home/docker/cp-test_ha-112386-m04_ha-112386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m02 "sudo cat /home/docker/cp-test_ha-112386-m04_ha-112386-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m04:/home/docker/cp-test.txt ha-112386-m03:/home/docker/cp-test_ha-112386-m04_ha-112386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m03 "sudo cat /home/docker/cp-test_ha-112386-m04_ha-112386-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.32s)
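minikube cp accepts host-to-node, node-to-host, and node-to-node forms, with a node named as <profile> or <profile>-mNN on either side of the colon; the test round-trips one file through every pairing and verifies each copy over ssh. Representative invocations from this run (the local destination path is a placeholder):

  $ out/minikube-linux-amd64 -p ha-112386 cp testdata/cp-test.txt ha-112386-m02:/home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-112386-m02.txt
  $ out/minikube-linux-amd64 -p ha-112386 cp ha-112386-m02:/home/docker/cp-test.txt ha-112386-m03:/home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p ha-112386 ssh -n ha-112386-m03 "sudo cat /home/docker/cp-test.txt"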

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.67s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 node stop m02 --alsologtostderr -v 5
E0510 18:13:35.208911 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:35.215323 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:35.226680 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:35.248129 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:35.289610 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:35.371341 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:35.532958 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:35.854804 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:36.496991 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:37.778554 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:40.340040 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:45.461692 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:13:55.703525 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:14:16.185562 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:14:30.129863 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 node stop m02 --alsologtostderr -v 5: (1m30.998628144s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5: exit status 7 (668.957674ms)

                                                
                                                
-- stdout --
	ha-112386
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-112386-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-112386-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-112386-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 18:14:51.103402 1190000 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:14:51.103510 1190000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:14:51.103515 1190000 out.go:358] Setting ErrFile to fd 2...
	I0510 18:14:51.103519 1190000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:14:51.103775 1190000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 18:14:51.103958 1190000 out.go:352] Setting JSON to false
	I0510 18:14:51.103998 1190000 mustload.go:65] Loading cluster: ha-112386
	I0510 18:14:51.104114 1190000 notify.go:220] Checking for updates...
	I0510 18:14:51.104424 1190000 config.go:182] Loaded profile config "ha-112386": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 18:14:51.104448 1190000 status.go:174] checking status of ha-112386 ...
	I0510 18:14:51.104891 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.104930 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.121690 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44331
	I0510 18:14:51.122136 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.122716 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.122742 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.123141 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.123385 1190000 main.go:141] libmachine: (ha-112386) Calling .GetState
	I0510 18:14:51.124930 1190000 status.go:371] ha-112386 host status = "Running" (err=<nil>)
	I0510 18:14:51.124948 1190000 host.go:66] Checking if "ha-112386" exists ...
	I0510 18:14:51.125262 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.125302 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.140293 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0510 18:14:51.140749 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.141229 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.141256 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.141575 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.141768 1190000 main.go:141] libmachine: (ha-112386) Calling .GetIP
	I0510 18:14:51.144608 1190000 main.go:141] libmachine: (ha-112386) DBG | domain ha-112386 has defined MAC address 52:54:00:92:05:02 in network mk-ha-112386
	I0510 18:14:51.145074 1190000 main.go:141] libmachine: (ha-112386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:05:02", ip: ""} in network mk-ha-112386: {Iface:virbr1 ExpiryTime:2025-05-10 19:08:53 +0000 UTC Type:0 Mac:52:54:00:92:05:02 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-112386 Clientid:01:52:54:00:92:05:02}
	I0510 18:14:51.145098 1190000 main.go:141] libmachine: (ha-112386) DBG | domain ha-112386 has defined IP address 192.168.39.155 and MAC address 52:54:00:92:05:02 in network mk-ha-112386
	I0510 18:14:51.145264 1190000 host.go:66] Checking if "ha-112386" exists ...
	I0510 18:14:51.145579 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.145636 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.161330 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0510 18:14:51.161869 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.162382 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.162403 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.162720 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.162924 1190000 main.go:141] libmachine: (ha-112386) Calling .DriverName
	I0510 18:14:51.163162 1190000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:14:51.163193 1190000 main.go:141] libmachine: (ha-112386) Calling .GetSSHHostname
	I0510 18:14:51.165879 1190000 main.go:141] libmachine: (ha-112386) DBG | domain ha-112386 has defined MAC address 52:54:00:92:05:02 in network mk-ha-112386
	I0510 18:14:51.166326 1190000 main.go:141] libmachine: (ha-112386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:05:02", ip: ""} in network mk-ha-112386: {Iface:virbr1 ExpiryTime:2025-05-10 19:08:53 +0000 UTC Type:0 Mac:52:54:00:92:05:02 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:ha-112386 Clientid:01:52:54:00:92:05:02}
	I0510 18:14:51.166348 1190000 main.go:141] libmachine: (ha-112386) DBG | domain ha-112386 has defined IP address 192.168.39.155 and MAC address 52:54:00:92:05:02 in network mk-ha-112386
	I0510 18:14:51.166478 1190000 main.go:141] libmachine: (ha-112386) Calling .GetSSHPort
	I0510 18:14:51.166642 1190000 main.go:141] libmachine: (ha-112386) Calling .GetSSHKeyPath
	I0510 18:14:51.166774 1190000 main.go:141] libmachine: (ha-112386) Calling .GetSSHUsername
	I0510 18:14:51.166908 1190000 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/ha-112386/id_rsa Username:docker}
	I0510 18:14:51.257511 1190000 ssh_runner.go:195] Run: systemctl --version
	I0510 18:14:51.263817 1190000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:14:51.283171 1190000 kubeconfig.go:125] found "ha-112386" server: "https://192.168.39.254:8443"
	I0510 18:14:51.283222 1190000 api_server.go:166] Checking apiserver status ...
	I0510 18:14:51.283260 1190000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 18:14:51.303078 1190000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0510 18:14:51.314413 1190000 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0510 18:14:51.314477 1190000 ssh_runner.go:195] Run: ls
	I0510 18:14:51.319057 1190000 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0510 18:14:51.326696 1190000 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0510 18:14:51.326719 1190000 status.go:463] ha-112386 apiserver status = Running (err=<nil>)
	I0510 18:14:51.326729 1190000 status.go:176] ha-112386 status: &{Name:ha-112386 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:14:51.326747 1190000 status.go:174] checking status of ha-112386-m02 ...
	I0510 18:14:51.327136 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.327193 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.345134 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39735
	I0510 18:14:51.345615 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.346063 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.346088 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.346480 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.346685 1190000 main.go:141] libmachine: (ha-112386-m02) Calling .GetState
	I0510 18:14:51.348349 1190000 status.go:371] ha-112386-m02 host status = "Stopped" (err=<nil>)
	I0510 18:14:51.348367 1190000 status.go:384] host is not running, skipping remaining checks
	I0510 18:14:51.348374 1190000 status.go:176] ha-112386-m02 status: &{Name:ha-112386-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:14:51.348390 1190000 status.go:174] checking status of ha-112386-m03 ...
	I0510 18:14:51.348714 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.348758 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.365226 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45637
	I0510 18:14:51.365764 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.366356 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.366385 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.366721 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.366922 1190000 main.go:141] libmachine: (ha-112386-m03) Calling .GetState
	I0510 18:14:51.368592 1190000 status.go:371] ha-112386-m03 host status = "Running" (err=<nil>)
	I0510 18:14:51.368609 1190000 host.go:66] Checking if "ha-112386-m03" exists ...
	I0510 18:14:51.368896 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.368933 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.385582 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0510 18:14:51.386033 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.386490 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.386516 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.386880 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.387089 1190000 main.go:141] libmachine: (ha-112386-m03) Calling .GetIP
	I0510 18:14:51.390237 1190000 main.go:141] libmachine: (ha-112386-m03) DBG | domain ha-112386-m03 has defined MAC address 52:54:00:7c:ad:20 in network mk-ha-112386
	I0510 18:14:51.390715 1190000 main.go:141] libmachine: (ha-112386-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ad:20", ip: ""} in network mk-ha-112386: {Iface:virbr1 ExpiryTime:2025-05-10 19:10:58 +0000 UTC Type:0 Mac:52:54:00:7c:ad:20 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-112386-m03 Clientid:01:52:54:00:7c:ad:20}
	I0510 18:14:51.390756 1190000 main.go:141] libmachine: (ha-112386-m03) DBG | domain ha-112386-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:7c:ad:20 in network mk-ha-112386
	I0510 18:14:51.390919 1190000 host.go:66] Checking if "ha-112386-m03" exists ...
	I0510 18:14:51.391220 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.391263 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.406753 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41007
	I0510 18:14:51.407342 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.407936 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.407965 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.408369 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.408572 1190000 main.go:141] libmachine: (ha-112386-m03) Calling .DriverName
	I0510 18:14:51.408760 1190000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:14:51.408780 1190000 main.go:141] libmachine: (ha-112386-m03) Calling .GetSSHHostname
	I0510 18:14:51.411793 1190000 main.go:141] libmachine: (ha-112386-m03) DBG | domain ha-112386-m03 has defined MAC address 52:54:00:7c:ad:20 in network mk-ha-112386
	I0510 18:14:51.412293 1190000 main.go:141] libmachine: (ha-112386-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:ad:20", ip: ""} in network mk-ha-112386: {Iface:virbr1 ExpiryTime:2025-05-10 19:10:58 +0000 UTC Type:0 Mac:52:54:00:7c:ad:20 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-112386-m03 Clientid:01:52:54:00:7c:ad:20}
	I0510 18:14:51.412313 1190000 main.go:141] libmachine: (ha-112386-m03) DBG | domain ha-112386-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:7c:ad:20 in network mk-ha-112386
	I0510 18:14:51.412550 1190000 main.go:141] libmachine: (ha-112386-m03) Calling .GetSSHPort
	I0510 18:14:51.412746 1190000 main.go:141] libmachine: (ha-112386-m03) Calling .GetSSHKeyPath
	I0510 18:14:51.412902 1190000 main.go:141] libmachine: (ha-112386-m03) Calling .GetSSHUsername
	I0510 18:14:51.413066 1190000 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/ha-112386-m03/id_rsa Username:docker}
	I0510 18:14:51.504485 1190000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:14:51.521873 1190000 kubeconfig.go:125] found "ha-112386" server: "https://192.168.39.254:8443"
	I0510 18:14:51.521907 1190000 api_server.go:166] Checking apiserver status ...
	I0510 18:14:51.521969 1190000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 18:14:51.539797 1190000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	W0510 18:14:51.551283 1190000 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0510 18:14:51.551371 1190000 ssh_runner.go:195] Run: ls
	I0510 18:14:51.555579 1190000 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0510 18:14:51.559865 1190000 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0510 18:14:51.559893 1190000 status.go:463] ha-112386-m03 apiserver status = Running (err=<nil>)
	I0510 18:14:51.559901 1190000 status.go:176] ha-112386-m03 status: &{Name:ha-112386-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:14:51.559920 1190000 status.go:174] checking status of ha-112386-m04 ...
	I0510 18:14:51.560292 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.560323 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.575987 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0510 18:14:51.576534 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.577015 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.577036 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.577450 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.577637 1190000 main.go:141] libmachine: (ha-112386-m04) Calling .GetState
	I0510 18:14:51.579638 1190000 status.go:371] ha-112386-m04 host status = "Running" (err=<nil>)
	I0510 18:14:51.579661 1190000 host.go:66] Checking if "ha-112386-m04" exists ...
	I0510 18:14:51.580049 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.580102 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.595901 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0510 18:14:51.596455 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.596879 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.596903 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.597416 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.597651 1190000 main.go:141] libmachine: (ha-112386-m04) Calling .GetIP
	I0510 18:14:51.600828 1190000 main.go:141] libmachine: (ha-112386-m04) DBG | domain ha-112386-m04 has defined MAC address 52:54:00:8d:06:d1 in network mk-ha-112386
	I0510 18:14:51.601381 1190000 main.go:141] libmachine: (ha-112386-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:06:d1", ip: ""} in network mk-ha-112386: {Iface:virbr1 ExpiryTime:2025-05-10 19:12:29 +0000 UTC Type:0 Mac:52:54:00:8d:06:d1 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-112386-m04 Clientid:01:52:54:00:8d:06:d1}
	I0510 18:14:51.601420 1190000 main.go:141] libmachine: (ha-112386-m04) DBG | domain ha-112386-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:8d:06:d1 in network mk-ha-112386
	I0510 18:14:51.601643 1190000 host.go:66] Checking if "ha-112386-m04" exists ...
	I0510 18:14:51.602132 1190000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:14:51.602190 1190000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:14:51.618550 1190000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0510 18:14:51.619008 1190000 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:14:51.619544 1190000 main.go:141] libmachine: Using API Version  1
	I0510 18:14:51.619566 1190000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:14:51.619940 1190000 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:14:51.620166 1190000 main.go:141] libmachine: (ha-112386-m04) Calling .DriverName
	I0510 18:14:51.620370 1190000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:14:51.620394 1190000 main.go:141] libmachine: (ha-112386-m04) Calling .GetSSHHostname
	I0510 18:14:51.623325 1190000 main.go:141] libmachine: (ha-112386-m04) DBG | domain ha-112386-m04 has defined MAC address 52:54:00:8d:06:d1 in network mk-ha-112386
	I0510 18:14:51.623698 1190000 main.go:141] libmachine: (ha-112386-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:06:d1", ip: ""} in network mk-ha-112386: {Iface:virbr1 ExpiryTime:2025-05-10 19:12:29 +0000 UTC Type:0 Mac:52:54:00:8d:06:d1 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-112386-m04 Clientid:01:52:54:00:8d:06:d1}
	I0510 18:14:51.623725 1190000 main.go:141] libmachine: (ha-112386-m04) DBG | domain ha-112386-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:8d:06:d1 in network mk-ha-112386
	I0510 18:14:51.623847 1190000 main.go:141] libmachine: (ha-112386-m04) Calling .GetSSHPort
	I0510 18:14:51.624039 1190000 main.go:141] libmachine: (ha-112386-m04) Calling .GetSSHKeyPath
	I0510 18:14:51.624233 1190000 main.go:141] libmachine: (ha-112386-m04) Calling .GetSSHUsername
	I0510 18:14:51.624409 1190000 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/ha-112386-m04/id_rsa Username:docker}
	I0510 18:14:51.704900 1190000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:14:51.722452 1190000 status.go:176] ha-112386-m04 status: &{Name:ha-112386-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.67s)
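Exit status 7 from minikube status is the expected signal here, not a failure: per the command's help text, the exit code packs three bits, 1 when the VM is not running, 2 when the cluster is not running, and 4 when Kubernetes is not running, so the fully stopped m02 yields 1+2+4 = 7. A quick way to read it:

  $ out/minikube-linux-amd64 -p ha-112386 status; echo "exit=$?"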

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (24.02s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 node start m02 --alsologtostderr -v 5
E0510 18:14:57.146968 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 node start m02 --alsologtostderr -v 5: (22.942360899s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.02s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.1s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.100079733s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.10s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (408.7s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 stop --alsologtostderr -v 5
E0510 18:16:19.069304 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:18:07.056258 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:18:35.208853 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:19:02.911142 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 stop --alsologtostderr -v 5: (4m34.388483456s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 start --wait true --alsologtostderr -v 5: (2m14.195232613s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (408.70s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.07s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 node delete m03 --alsologtostderr -v 5: (6.286450796s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.07s)
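The go-template in the last step prints one line per node's Ready condition, a convenient scriptable health check after topology changes:

  $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'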

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.99s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 stop --alsologtostderr -v 5
E0510 18:23:07.056439 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:23:35.208259 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 stop --alsologtostderr -v 5: (4m32.871858791s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5: exit status 7 (114.271983ms)

                                                
                                                
-- stdout --
	ha-112386
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-112386-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-112386-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 18:26:46.854111 1194120 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:26:46.854380 1194120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:26:46.854388 1194120 out.go:358] Setting ErrFile to fd 2...
	I0510 18:26:46.854392 1194120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:26:46.854590 1194120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 18:26:46.854745 1194120 out.go:352] Setting JSON to false
	I0510 18:26:46.854782 1194120 mustload.go:65] Loading cluster: ha-112386
	I0510 18:26:46.854881 1194120 notify.go:220] Checking for updates...
	I0510 18:26:46.855167 1194120 config.go:182] Loaded profile config "ha-112386": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 18:26:46.855196 1194120 status.go:174] checking status of ha-112386 ...
	I0510 18:26:46.856302 1194120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:26:46.856360 1194120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:26:46.879357 1194120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0510 18:26:46.879887 1194120 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:26:46.880479 1194120 main.go:141] libmachine: Using API Version  1
	I0510 18:26:46.880500 1194120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:26:46.880934 1194120 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:26:46.881162 1194120 main.go:141] libmachine: (ha-112386) Calling .GetState
	I0510 18:26:46.882861 1194120 status.go:371] ha-112386 host status = "Stopped" (err=<nil>)
	I0510 18:26:46.882875 1194120 status.go:384] host is not running, skipping remaining checks
	I0510 18:26:46.882883 1194120 status.go:176] ha-112386 status: &{Name:ha-112386 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:26:46.882905 1194120 status.go:174] checking status of ha-112386-m02 ...
	I0510 18:26:46.883239 1194120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:26:46.883265 1194120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:26:46.898183 1194120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43053
	I0510 18:26:46.898591 1194120 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:26:46.899038 1194120 main.go:141] libmachine: Using API Version  1
	I0510 18:26:46.899063 1194120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:26:46.899463 1194120 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:26:46.899662 1194120 main.go:141] libmachine: (ha-112386-m02) Calling .GetState
	I0510 18:26:46.901208 1194120 status.go:371] ha-112386-m02 host status = "Stopped" (err=<nil>)
	I0510 18:26:46.901223 1194120 status.go:384] host is not running, skipping remaining checks
	I0510 18:26:46.901229 1194120 status.go:176] ha-112386-m02 status: &{Name:ha-112386-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:26:46.901247 1194120 status.go:174] checking status of ha-112386-m04 ...
	I0510 18:26:46.901556 1194120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:26:46.901612 1194120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:26:46.916276 1194120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40327
	I0510 18:26:46.916727 1194120 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:26:46.917287 1194120 main.go:141] libmachine: Using API Version  1
	I0510 18:26:46.917313 1194120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:26:46.917664 1194120 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:26:46.917852 1194120 main.go:141] libmachine: (ha-112386-m04) Calling .GetState
	I0510 18:26:46.919225 1194120 status.go:371] ha-112386-m04 host status = "Stopped" (err=<nil>)
	I0510 18:26:46.919241 1194120 status.go:384] host is not running, skipping remaining checks
	I0510 18:26:46.919248 1194120 status.go:176] ha-112386-m04 status: &{Name:ha-112386-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.99s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (101.56s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
E0510 18:28:07.055594 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (1m40.787085889s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.56s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.99s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 node add --control-plane --alsologtostderr -v 5
E0510 18:28:35.208908 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-112386 node add --control-plane --alsologtostderr -v 5: (1m18.091025574s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-112386 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.99s)
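Adding a control plane is the same node add used for workers plus --control-plane; the new member is folded in behind the cluster's shared apiserver endpoint (https://192.168.39.254:8443 in the status logs above, distinct from any single node's IP), which accounts for the longer join time:

  $ out/minikube-linux-amd64 -p ha-112386 node add --control-plane
  $ out/minikube-linux-amd64 -p ha-112386 status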

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (87.75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-328727 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0510 18:29:58.273746 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:31:10.132360 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-328727 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m27.749567704s)
--- PASS: TestJSONOutput/start/Command (87.75s)
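With --output=json, minikube writes one CloudEvents-style JSON object per line (the event shape is visible in the TestErrorJSONOutput transcript further down). A sketch of pulling just the step events out of the stream, assuming jq is available on the host:

	out/minikube-linux-amd64 start -p json-output-328727 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=containerd \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + " " + .data.message'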

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-328727 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-328727 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.58s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-328727 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-328727 --output=json --user=testUser: (6.580092694s)
--- PASS: TestJSONOutput/stop/Command (6.58s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-594709 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-594709 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.614913ms)

-- stdout --
	{"specversion":"1.0","id":"b34934e7-db14-4a58-876f-730b920b2b1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-594709] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ca3d28b-af0b-4ece-b06a-1c214b237d0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20720"}}
	{"specversion":"1.0","id":"32d0c4c5-2050-4716-b7dc-31f111ad141b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0c82cfc5-30bc-48c7-aa3a-bf649ab74b53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig"}}
	{"specversion":"1.0","id":"826286db-62b8-4952-a794-2191e75e6ae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube"}}
	{"specversion":"1.0","id":"2f15640b-ad1b-47e7-abf1-7fdb969c886d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d2ad8526-5a03-43c4-a80b-69716118aba5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ce9c1b36-a730-44d5-8328-d85ac2bbfb08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-594709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-594709
--- PASS: TestErrorJSONOutput (0.21s)
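The error event carries a stable error name and exit code in its data payload, so a wrapper script can branch on those fields instead of scraping human-readable text; a sketch, again assuming jq:

	out/minikube-linux-amd64 start -p json-output-error-594709 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + "): " + .data.message'
	# prints: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64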

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (93.36s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-427425 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-427425 --driver=kvm2  --container-runtime=containerd: (48.61078726s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-438855 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-438855 --driver=kvm2  --container-runtime=containerd: (41.818635016s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-427425
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-438855
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-438855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-438855
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-438855: (1.020649915s)
helpers_test.go:175: Cleaning up "first-427425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-427425
--- PASS: TestMinikubeProfile (93.36s)
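The profile machinery exercised here is how two clusters coexist on one host: `profile <name>` switches the active profile and `profile list -ojson` reports all of them. A sketch with the same profile names:

	out/minikube-linux-amd64 start -p first-427425 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 start -p second-438855 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 profile first-427425   # make first-427425 the active profile
	out/minikube-linux-amd64 profile list -ojson    # both profiles should be listed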

TestMountStart/serial/StartWithMountFirst (28.29s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-897592 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0510 18:33:07.056446 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-897592 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.286358889s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.29s)
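The --mount* flags map onto a 9p share exposed inside the guest at /minikube-host; a sketch of starting a mount-only machine (--no-kubernetes) and checking the share, using the flags from the run above (that the msize and port values surface in the mount options is an assumption based on those flags):

	out/minikube-linux-amd64 start -p mount-start-1-897592 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 -p mount-start-1-897592 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-897592 ssh -- mount | grep 9p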

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-897592 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-897592 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (29.02s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-912548 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0510 18:33:35.208368 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-912548 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.024512646s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.02s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-912548 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-912548 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-897592 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-912548 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-912548 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-912548
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-912548: (1.285965802s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (24.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-912548
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-912548: (23.047799906s)
--- PASS: TestMountStart/serial/RestartStopped (24.05s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-912548 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-912548 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (114.04s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451691 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-451691 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m53.625931514s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.04s)
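A two-node cluster comes up from one start invocation; a minimal sketch mirroring the command above:

	out/minikube-linux-amd64 start -p multinode-451691 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 -p multinode-451691 status --alsologtostderr   # expect one control plane and one worker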

TestMultiNode/serial/DeployApp2Nodes (5.06s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-451691 -- rollout status deployment/busybox: (3.600466976s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-6dtsb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-s2ktr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-6dtsb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-s2ktr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-6dtsb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-s2ktr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.06s)
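The DNS checks reduce to exec-ing nslookup inside each busybox replica; a sketch, with <pod> standing in for one of the names the first command returns:

	out/minikube-linux-amd64 kubectl -p multinode-451691 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec <pod> -- nslookup kubernetes.io
	out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local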

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-6dtsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-6dtsb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-s2ktr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec busybox-58667487b6-s2ktr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
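The host-reachability probe resolves host.minikube.internal inside the pod, slices the address out of nslookup's output (awk 'NR==5' keeps line five, cut -d' ' -f3 keeps the third space-separated field), and pings it; a sketch for one pod (192.168.39.1 is the KVM gateway observed in this run):

	out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 kubectl -p multinode-451691 -- exec <pod> -- sh -c "ping -c 1 192.168.39.1"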

TestMultiNode/serial/AddNode (50.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-451691 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-451691 -v=5 --alsologtostderr: (49.649666528s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.24s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-451691 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.59s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

TestMultiNode/serial/CopyFile (7.42s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp testdata/cp-test.txt multinode-451691:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1984866409/001/cp-test_multinode-451691.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691:/home/docker/cp-test.txt multinode-451691-m02:/home/docker/cp-test_multinode-451691_multinode-451691-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m02 "sudo cat /home/docker/cp-test_multinode-451691_multinode-451691-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691:/home/docker/cp-test.txt multinode-451691-m03:/home/docker/cp-test_multinode-451691_multinode-451691-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m03 "sudo cat /home/docker/cp-test_multinode-451691_multinode-451691-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp testdata/cp-test.txt multinode-451691-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1984866409/001/cp-test_multinode-451691-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691-m02:/home/docker/cp-test.txt multinode-451691:/home/docker/cp-test_multinode-451691-m02_multinode-451691.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691 "sudo cat /home/docker/cp-test_multinode-451691-m02_multinode-451691.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691-m02:/home/docker/cp-test.txt multinode-451691-m03:/home/docker/cp-test_multinode-451691-m02_multinode-451691-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m03 "sudo cat /home/docker/cp-test_multinode-451691-m02_multinode-451691-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp testdata/cp-test.txt multinode-451691-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1984866409/001/cp-test_multinode-451691-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691-m03:/home/docker/cp-test.txt multinode-451691:/home/docker/cp-test_multinode-451691-m03_multinode-451691.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691 "sudo cat /home/docker/cp-test_multinode-451691-m03_multinode-451691.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691-m03:/home/docker/cp-test.txt multinode-451691-m02:/home/docker/cp-test_multinode-451691-m03_multinode-451691-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m02 "sudo cat /home/docker/cp-test_multinode-451691-m03_multinode-451691-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.42s)
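minikube cp accepts a host path or a <node>:<path> pair on either side, so the one subcommand covers host-to-node, node-to-host, and node-to-node copies; a sketch (the destination paths are illustrative):

	out/minikube-linux-amd64 -p multinode-451691 cp testdata/cp-test.txt multinode-451691:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691:/home/docker/cp-test.txt /tmp/cp-test.txt
	out/minikube-linux-amd64 -p multinode-451691 cp multinode-451691:/home/docker/cp-test.txt multinode-451691-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-451691 ssh -n multinode-451691-m02 "sudo cat /home/docker/cp-test.txt"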

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-451691 node stop m03: (1.392443056s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-451691 status: exit status 7 (433.506987ms)

-- stdout --
	multinode-451691
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-451691-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-451691-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-451691 status --alsologtostderr: exit status 7 (425.570313ms)

-- stdout --
	multinode-451691
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-451691-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-451691-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0510 18:37:28.167299 1201947 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:37:28.167536 1201947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:37:28.167545 1201947 out.go:358] Setting ErrFile to fd 2...
	I0510 18:37:28.167549 1201947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:37:28.167726 1201947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 18:37:28.167869 1201947 out.go:352] Setting JSON to false
	I0510 18:37:28.167902 1201947 mustload.go:65] Loading cluster: multinode-451691
	I0510 18:37:28.167965 1201947 notify.go:220] Checking for updates...
	I0510 18:37:28.168321 1201947 config.go:182] Loaded profile config "multinode-451691": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 18:37:28.168345 1201947 status.go:174] checking status of multinode-451691 ...
	I0510 18:37:28.168806 1201947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:37:28.168850 1201947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:37:28.185160 1201947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46859
	I0510 18:37:28.185607 1201947 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:37:28.186154 1201947 main.go:141] libmachine: Using API Version  1
	I0510 18:37:28.186180 1201947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:37:28.186598 1201947 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:37:28.186797 1201947 main.go:141] libmachine: (multinode-451691) Calling .GetState
	I0510 18:37:28.188274 1201947 status.go:371] multinode-451691 host status = "Running" (err=<nil>)
	I0510 18:37:28.188296 1201947 host.go:66] Checking if "multinode-451691" exists ...
	I0510 18:37:28.188592 1201947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:37:28.188638 1201947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:37:28.203915 1201947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
	I0510 18:37:28.204365 1201947 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:37:28.204810 1201947 main.go:141] libmachine: Using API Version  1
	I0510 18:37:28.204832 1201947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:37:28.205140 1201947 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:37:28.205356 1201947 main.go:141] libmachine: (multinode-451691) Calling .GetIP
	I0510 18:37:28.208003 1201947 main.go:141] libmachine: (multinode-451691) DBG | domain multinode-451691 has defined MAC address 52:54:00:4c:a3:df in network mk-multinode-451691
	I0510 18:37:28.208386 1201947 main.go:141] libmachine: (multinode-451691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:a3:df", ip: ""} in network mk-multinode-451691: {Iface:virbr1 ExpiryTime:2025-05-10 19:34:42 +0000 UTC Type:0 Mac:52:54:00:4c:a3:df Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:multinode-451691 Clientid:01:52:54:00:4c:a3:df}
	I0510 18:37:28.208417 1201947 main.go:141] libmachine: (multinode-451691) DBG | domain multinode-451691 has defined IP address 192.168.39.132 and MAC address 52:54:00:4c:a3:df in network mk-multinode-451691
	I0510 18:37:28.208538 1201947 host.go:66] Checking if "multinode-451691" exists ...
	I0510 18:37:28.208818 1201947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:37:28.208853 1201947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:37:28.224202 1201947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34327
	I0510 18:37:28.224618 1201947 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:37:28.225012 1201947 main.go:141] libmachine: Using API Version  1
	I0510 18:37:28.225034 1201947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:37:28.225396 1201947 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:37:28.225588 1201947 main.go:141] libmachine: (multinode-451691) Calling .DriverName
	I0510 18:37:28.225815 1201947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:37:28.225845 1201947 main.go:141] libmachine: (multinode-451691) Calling .GetSSHHostname
	I0510 18:37:28.228799 1201947 main.go:141] libmachine: (multinode-451691) DBG | domain multinode-451691 has defined MAC address 52:54:00:4c:a3:df in network mk-multinode-451691
	I0510 18:37:28.229227 1201947 main.go:141] libmachine: (multinode-451691) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:a3:df", ip: ""} in network mk-multinode-451691: {Iface:virbr1 ExpiryTime:2025-05-10 19:34:42 +0000 UTC Type:0 Mac:52:54:00:4c:a3:df Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:multinode-451691 Clientid:01:52:54:00:4c:a3:df}
	I0510 18:37:28.229267 1201947 main.go:141] libmachine: (multinode-451691) DBG | domain multinode-451691 has defined IP address 192.168.39.132 and MAC address 52:54:00:4c:a3:df in network mk-multinode-451691
	I0510 18:37:28.229397 1201947 main.go:141] libmachine: (multinode-451691) Calling .GetSSHPort
	I0510 18:37:28.229581 1201947 main.go:141] libmachine: (multinode-451691) Calling .GetSSHKeyPath
	I0510 18:37:28.229707 1201947 main.go:141] libmachine: (multinode-451691) Calling .GetSSHUsername
	I0510 18:37:28.229835 1201947 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/multinode-451691/id_rsa Username:docker}
	I0510 18:37:28.311093 1201947 ssh_runner.go:195] Run: systemctl --version
	I0510 18:37:28.316339 1201947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:37:28.331792 1201947 kubeconfig.go:125] found "multinode-451691" server: "https://192.168.39.132:8443"
	I0510 18:37:28.331840 1201947 api_server.go:166] Checking apiserver status ...
	I0510 18:37:28.331883 1201947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 18:37:28.349723 1201947 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	W0510 18:37:28.360347 1201947 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0510 18:37:28.360453 1201947 ssh_runner.go:195] Run: ls
	I0510 18:37:28.364801 1201947 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
	I0510 18:37:28.369426 1201947 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
	ok
	I0510 18:37:28.369451 1201947 status.go:463] multinode-451691 apiserver status = Running (err=<nil>)
	I0510 18:37:28.369465 1201947 status.go:176] multinode-451691 status: &{Name:multinode-451691 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:37:28.369489 1201947 status.go:174] checking status of multinode-451691-m02 ...
	I0510 18:37:28.369784 1201947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:37:28.369815 1201947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:37:28.385637 1201947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0510 18:37:28.386090 1201947 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:37:28.386522 1201947 main.go:141] libmachine: Using API Version  1
	I0510 18:37:28.386545 1201947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:37:28.386919 1201947 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:37:28.387088 1201947 main.go:141] libmachine: (multinode-451691-m02) Calling .GetState
	I0510 18:37:28.388607 1201947 status.go:371] multinode-451691-m02 host status = "Running" (err=<nil>)
	I0510 18:37:28.388625 1201947 host.go:66] Checking if "multinode-451691-m02" exists ...
	I0510 18:37:28.388914 1201947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:37:28.388950 1201947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:37:28.404526 1201947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0510 18:37:28.404943 1201947 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:37:28.405431 1201947 main.go:141] libmachine: Using API Version  1
	I0510 18:37:28.405461 1201947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:37:28.405799 1201947 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:37:28.406011 1201947 main.go:141] libmachine: (multinode-451691-m02) Calling .GetIP
	I0510 18:37:28.408550 1201947 main.go:141] libmachine: (multinode-451691-m02) DBG | domain multinode-451691-m02 has defined MAC address 52:54:00:2c:a1:3b in network mk-multinode-451691
	I0510 18:37:28.408951 1201947 main.go:141] libmachine: (multinode-451691-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:a1:3b", ip: ""} in network mk-multinode-451691: {Iface:virbr1 ExpiryTime:2025-05-10 19:35:45 +0000 UTC Type:0 Mac:52:54:00:2c:a1:3b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:multinode-451691-m02 Clientid:01:52:54:00:2c:a1:3b}
	I0510 18:37:28.408979 1201947 main.go:141] libmachine: (multinode-451691-m02) DBG | domain multinode-451691-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:2c:a1:3b in network mk-multinode-451691
	I0510 18:37:28.409145 1201947 host.go:66] Checking if "multinode-451691-m02" exists ...
	I0510 18:37:28.409479 1201947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:37:28.409527 1201947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:37:28.425388 1201947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46043
	I0510 18:37:28.425934 1201947 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:37:28.426416 1201947 main.go:141] libmachine: Using API Version  1
	I0510 18:37:28.426447 1201947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:37:28.426768 1201947 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:37:28.426960 1201947 main.go:141] libmachine: (multinode-451691-m02) Calling .DriverName
	I0510 18:37:28.427156 1201947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 18:37:28.427181 1201947 main.go:141] libmachine: (multinode-451691-m02) Calling .GetSSHHostname
	I0510 18:37:28.429916 1201947 main.go:141] libmachine: (multinode-451691-m02) DBG | domain multinode-451691-m02 has defined MAC address 52:54:00:2c:a1:3b in network mk-multinode-451691
	I0510 18:37:28.430280 1201947 main.go:141] libmachine: (multinode-451691-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:a1:3b", ip: ""} in network mk-multinode-451691: {Iface:virbr1 ExpiryTime:2025-05-10 19:35:45 +0000 UTC Type:0 Mac:52:54:00:2c:a1:3b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:multinode-451691-m02 Clientid:01:52:54:00:2c:a1:3b}
	I0510 18:37:28.430325 1201947 main.go:141] libmachine: (multinode-451691-m02) DBG | domain multinode-451691-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:2c:a1:3b in network mk-multinode-451691
	I0510 18:37:28.430454 1201947 main.go:141] libmachine: (multinode-451691-m02) Calling .GetSSHPort
	I0510 18:37:28.430624 1201947 main.go:141] libmachine: (multinode-451691-m02) Calling .GetSSHKeyPath
	I0510 18:37:28.430758 1201947 main.go:141] libmachine: (multinode-451691-m02) Calling .GetSSHUsername
	I0510 18:37:28.430871 1201947 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20720-1165049/.minikube/machines/multinode-451691-m02/id_rsa Username:docker}
	I0510 18:37:28.510927 1201947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 18:37:28.525383 1201947 status.go:176] multinode-451691-m02 status: &{Name:multinode-451691-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:37:28.525436 1201947 status.go:174] checking status of multinode-451691-m03 ...
	I0510 18:37:28.525792 1201947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:37:28.525825 1201947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:37:28.541539 1201947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0510 18:37:28.541986 1201947 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:37:28.542435 1201947 main.go:141] libmachine: Using API Version  1
	I0510 18:37:28.542456 1201947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:37:28.542786 1201947 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:37:28.542975 1201947 main.go:141] libmachine: (multinode-451691-m03) Calling .GetState
	I0510 18:37:28.544447 1201947 status.go:371] multinode-451691-m03 host status = "Stopped" (err=<nil>)
	I0510 18:37:28.544462 1201947 status.go:384] host is not running, skipping remaining checks
	I0510 18:37:28.544469 1201947 status.go:176] multinode-451691-m03 status: &{Name:multinode-451691-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
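Note the status semantics: with any node stopped, minikube status still prints the per-node breakdown but exits non-zero (exit status 7 in this run); a sketch:

	out/minikube-linux-amd64 -p multinode-451691 node stop m03
	out/minikube-linux-amd64 -p multinode-451691 status || echo "status exited $?"   # expect 7 while m03 is down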

TestMultiNode/serial/StartAfterStop (33.35s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-451691 node start m03 -v=5 --alsologtostderr: (32.707906121s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (33.35s)

TestMultiNode/serial/RestartKeepsNodes (313.63s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-451691
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-451691
E0510 18:38:07.057022 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:38:35.215616 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-451691: (3m3.304082858s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451691 --wait=true -v=5 --alsologtostderr
E0510 18:43:07.055489 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-451691 --wait=true -v=5 --alsologtostderr: (2m10.216903528s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-451691
--- PASS: TestMultiNode/serial/RestartKeepsNodes (313.63s)
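The point of this test is that a full stop/start cycle preserves the node list; a sketch of the same round trip:

	out/minikube-linux-amd64 node list -p multinode-451691
	out/minikube-linux-amd64 stop -p multinode-451691
	out/minikube-linux-amd64 start -p multinode-451691 --wait=true -v=5 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-451691   # should match the list captured before the stop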

TestMultiNode/serial/DeleteNode (2.22s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-451691 node delete m03: (1.678634932s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.22s)

TestMultiNode/serial/StopMultiNode (182.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 stop
E0510 18:43:35.215740 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-451691 stop: (3m1.928977807s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-451691 status: exit status 7 (89.769649ms)

-- stdout --
	multinode-451691
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-451691-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-451691 status --alsologtostderr: exit status 7 (86.984362ms)

-- stdout --
	multinode-451691
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-451691-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0510 18:46:19.814659 1204636 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:46:19.814937 1204636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:46:19.814949 1204636 out.go:358] Setting ErrFile to fd 2...
	I0510 18:46:19.814953 1204636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:46:19.815160 1204636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 18:46:19.815326 1204636 out.go:352] Setting JSON to false
	I0510 18:46:19.815366 1204636 mustload.go:65] Loading cluster: multinode-451691
	I0510 18:46:19.815427 1204636 notify.go:220] Checking for updates...
	I0510 18:46:19.815799 1204636 config.go:182] Loaded profile config "multinode-451691": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 18:46:19.815826 1204636 status.go:174] checking status of multinode-451691 ...
	I0510 18:46:19.816329 1204636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:46:19.816372 1204636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:46:19.831664 1204636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I0510 18:46:19.832099 1204636 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:46:19.832757 1204636 main.go:141] libmachine: Using API Version  1
	I0510 18:46:19.832783 1204636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:46:19.833201 1204636 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:46:19.833435 1204636 main.go:141] libmachine: (multinode-451691) Calling .GetState
	I0510 18:46:19.835220 1204636 status.go:371] multinode-451691 host status = "Stopped" (err=<nil>)
	I0510 18:46:19.835238 1204636 status.go:384] host is not running, skipping remaining checks
	I0510 18:46:19.835245 1204636 status.go:176] multinode-451691 status: &{Name:multinode-451691 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 18:46:19.835299 1204636 status.go:174] checking status of multinode-451691-m02 ...
	I0510 18:46:19.835666 1204636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0510 18:46:19.835724 1204636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0510 18:46:19.850871 1204636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0510 18:46:19.851355 1204636 main.go:141] libmachine: () Calling .GetVersion
	I0510 18:46:19.851872 1204636 main.go:141] libmachine: Using API Version  1
	I0510 18:46:19.851895 1204636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0510 18:46:19.852260 1204636 main.go:141] libmachine: () Calling .GetMachineName
	I0510 18:46:19.852447 1204636 main.go:141] libmachine: (multinode-451691-m02) Calling .GetState
	I0510 18:46:19.853891 1204636 status.go:371] multinode-451691-m02 host status = "Stopped" (err=<nil>)
	I0510 18:46:19.853905 1204636 status.go:384] host is not running, skipping remaining checks
	I0510 18:46:19.853911 1204636 status.go:176] multinode-451691-m02 status: &{Name:multinode-451691-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.11s)

TestMultiNode/serial/RestartMultiNode (86.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451691 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0510 18:46:38.275498 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-451691 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m26.24839737s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-451691 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.81s)

TestMultiNode/serial/ValidateNameConflict (46.66s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-451691
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451691-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-451691-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (69.218653ms)

-- stdout --
	* [multinode-451691-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	! Profile name 'multinode-451691-m02' is duplicated with machine name 'multinode-451691-m02' in profile 'multinode-451691'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
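Note: exit status 14 is the expected usage error here, since 'multinode-451691-m02' is already taken as a machine name inside the 'multinode-451691' profile. A minimal sketch of how to pick a non-conflicting name (commands as used elsewhere in this run; the -m03 name is the one the test itself falls back to next):

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 start -p multinode-451691-m03 --driver=kvm2 --container-runtime=containerd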
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-451691-m03 --driver=kvm2  --container-runtime=containerd
E0510 18:47:50.136051 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:48:07.056247 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-451691-m03 --driver=kvm2  --container-runtime=containerd: (45.521759217s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-451691
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-451691: exit status 80 (219.723502ms)

-- stdout --
	* Adding node m03 to cluster multinode-451691 as [worker]

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-451691-m03 already exists in multinode-451691-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
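Note: exit status 80 (GUEST_NODE_ADD) is also the expected failure: the standalone 'multinode-451691-m03' profile shadows the node name that 'node add' would assign. The recovery, which the test performs next, is to delete the conflicting profile first; a minimal sketch:

	out/minikube-linux-amd64 delete -p multinode-451691-m03
	out/minikube-linux-amd64 node add -p multinode-451691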
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-451691-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.66s)

TestPreload (271.51s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-978051 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0510 18:48:35.208068 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-978051 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m58.925291818s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-978051 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-978051 image pull gcr.io/k8s-minikube/busybox: (2.420312328s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-978051
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-978051: (1m30.999037297s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-978051 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-978051 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (57.878992259s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-978051 image list
helpers_test.go:175: Cleaning up "test-preload-978051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-978051
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-978051: (1.071328993s)
--- PASS: TestPreload (271.51s)

TestScheduledStopUnix (117.52s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-567740 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0510 18:53:07.055584 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:53:35.210405 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-567740 --memory=2048 --driver=kvm2  --container-runtime=containerd: (45.782513675s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-567740 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-567740 -n scheduled-stop-567740
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-567740 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0510 18:53:52.769445 1172304 retry.go:31] will retry after 107.48µs: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.770617 1172304 retry.go:31] will retry after 175.827µs: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.771771 1172304 retry.go:31] will retry after 294.681µs: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.772916 1172304 retry.go:31] will retry after 337.139µs: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.774048 1172304 retry.go:31] will retry after 388.57µs: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.775196 1172304 retry.go:31] will retry after 772.075µs: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.776340 1172304 retry.go:31] will retry after 936.878µs: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.777487 1172304 retry.go:31] will retry after 2.551572ms: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.780707 1172304 retry.go:31] will retry after 2.847845ms: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.783958 1172304 retry.go:31] will retry after 4.362725ms: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.789176 1172304 retry.go:31] will retry after 7.125528ms: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.796577 1172304 retry.go:31] will retry after 4.80143ms: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.801947 1172304 retry.go:31] will retry after 10.362471ms: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.813205 1172304 retry.go:31] will retry after 26.407425ms: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
I0510 18:53:52.840525 1172304 retry.go:31] will retry after 43.600339ms: open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/scheduled-stop-567740/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-567740 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-567740 -n scheduled-stop-567740
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-567740
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-567740 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-567740
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-567740: exit status 7 (72.974901ms)

-- stdout --
	scheduled-stop-567740
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-567740 -n scheduled-stop-567740
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-567740 -n scheduled-stop-567740: exit status 7 (69.880269ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-567740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-567740
--- PASS: TestScheduledStopUnix (117.52s)

TestRunningBinaryUpgrade (207.75s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2211752824 start -p running-upgrade-890047 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2211752824 start -p running-upgrade-890047 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m8.35477586s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-890047 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-890047 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m13.958547258s)
helpers_test.go:175: Cleaning up "running-upgrade-890047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-890047
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-890047: (3.159628835s)
--- PASS: TestRunningBinaryUpgrade (207.75s)

TestKubernetesUpgrade (214.99s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-346615 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-346615 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m35.736791648s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-346615
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-346615: (2.382780344s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-346615 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-346615 status --format={{.Host}}: exit status 7 (118.484994ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-346615 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0510 18:58:07.055529 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-346615 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (51.009368918s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-346615 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-346615 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-346615 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (94.93337ms)

-- stdout --
	* [kubernetes-upgrade-346615] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.33.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-346615
	    minikube start -p kubernetes-upgrade-346615 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3466152 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.33.0, by running:
	    
	    minikube start -p kubernetes-upgrade-346615 --kubernetes-version=v1.33.0

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-346615 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-346615 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m4.596819679s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-346615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-346615
--- PASS: TestKubernetesUpgrade (214.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-880115 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-880115 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (87.275619ms)

-- stdout --
	* [NoKubernetes-880115] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
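Note: --no-kubernetes and --kubernetes-version are mutually exclusive, so exit status 14 is the expected outcome. A minimal sketch of the two valid alternatives (both flag combinations appear elsewhere in this run):

	out/minikube-linux-amd64 start -p NoKubernetes-880115 --no-kubernetes --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 start -p NoKubernetes-880115 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd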
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (98.8s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-880115 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-880115 --driver=kvm2  --container-runtime=containerd: (1m38.508282073s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-880115 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.80s)

TestNetworkPlugins/group/false (4.12s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-742615 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-742615 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (404.237431ms)

-- stdout --
	* [false-742615] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration

-- /stdout --
** stderr ** 
	I0510 18:55:56.671059 1209897 out.go:345] Setting OutFile to fd 1 ...
	I0510 18:55:56.671516 1209897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:55:56.671551 1209897 out.go:358] Setting ErrFile to fd 2...
	I0510 18:55:56.671567 1209897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 18:55:56.672475 1209897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-1165049/.minikube/bin
	I0510 18:55:56.673278 1209897 out.go:352] Setting JSON to false
	I0510 18:55:56.674331 1209897 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":23901,"bootTime":1746879456,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 18:55:56.674420 1209897 start.go:140] virtualization: kvm guest
	I0510 18:55:56.676987 1209897 out.go:177] * [false-742615] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 18:55:56.678621 1209897 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 18:55:56.678654 1209897 notify.go:220] Checking for updates...
	I0510 18:55:56.681865 1209897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 18:55:56.683570 1209897 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-1165049/kubeconfig
	I0510 18:55:56.685046 1209897 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-1165049/.minikube
	I0510 18:55:56.686449 1209897 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 18:55:56.688393 1209897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 18:55:56.691550 1209897 config.go:182] Loaded profile config "NoKubernetes-880115": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 18:55:56.691696 1209897 config.go:182] Loaded profile config "offline-containerd-866299": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
	I0510 18:55:56.691803 1209897 config.go:182] Loaded profile config "running-upgrade-890047": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0510 18:55:56.691943 1209897 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 18:55:57.021329 1209897 out.go:177] * Using the kvm2 driver based on user configuration
	I0510 18:55:57.022667 1209897 start.go:304] selected driver: kvm2
	I0510 18:55:57.022682 1209897 start.go:908] validating driver "kvm2" against <nil>
	I0510 18:55:57.022695 1209897 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 18:55:57.024625 1209897 out.go:201] 
	W0510 18:55:57.025880 1209897 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0510 18:55:57.026952 1209897 out.go:201] 

** /stderr **
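Note: the test passes because this usage error is the expected outcome: minikube rejects --cni=false whenever the container runtime is containerd, which cannot run pods without a CNI plugin. A minimal sketch of a start that satisfies the constraint (the bridge CNI choice is an illustrative assumption, not part of this run):

	out/minikube-linux-amd64 start -p false-742615 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=containerd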
net_test.go:88: 
----------------------- debugLogs start: false-742615 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-742615

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-742615

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-742615

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-742615

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-742615

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-742615

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-742615

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-742615

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-742615

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-742615

>>> host: /etc/nsswitch.conf:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /etc/hosts:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /etc/resolv.conf:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-742615

>>> host: crictl pods:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: crictl containers:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> k8s: describe netcat deployment:
error: context "false-742615" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-742615" does not exist

>>> k8s: netcat logs:
error: context "false-742615" does not exist

>>> k8s: describe coredns deployment:
error: context "false-742615" does not exist

>>> k8s: describe coredns pods:
error: context "false-742615" does not exist

>>> k8s: coredns logs:
error: context "false-742615" does not exist

>>> k8s: describe api server pod(s):
error: context "false-742615" does not exist

>>> k8s: api server logs:
error: context "false-742615" does not exist

>>> host: /etc/cni:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: ip a s:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: ip r s:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: iptables-save:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: iptables table nat:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> k8s: describe kube-proxy daemon set:
error: context "false-742615" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-742615" does not exist

>>> k8s: kube-proxy logs:
error: context "false-742615" does not exist

>>> host: kubelet daemon status:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: kubelet daemon config:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> k8s: kubelet logs:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-742615

>>> host: docker daemon status:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: docker daemon config:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /etc/docker/daemon.json:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: docker system info:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: cri-docker daemon status:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: cri-docker daemon config:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: cri-dockerd version:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: containerd daemon status:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: containerd daemon config:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /etc/containerd/config.toml:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: containerd config dump:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: crio daemon status:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: crio daemon config:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: /etc/crio:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

>>> host: crio config:
* Profile "false-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-742615"

----------------------- debugLogs end: false-742615 [took: 3.548896336s] --------------------------------
helpers_test.go:175: Cleaning up "false-742615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-742615
--- PASS: TestNetworkPlugins/group/false (4.12s)

TestNoKubernetes/serial/StartWithStopK8s (54.46s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-880115 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-880115 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (52.895567959s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-880115 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-880115 status -o json: exit status 2 (355.236749ms)

-- stdout --
	{"Name":"NoKubernetes-880115","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-880115
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-880115: (1.20794189s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (54.46s)

TestPause/serial/Start (99.25s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-575980 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-575980 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m39.254635918s)
--- PASS: TestPause/serial/Start (99.25s)

TestNoKubernetes/serial/Start (29.69s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-880115 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-880115 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.689948155s)
--- PASS: TestNoKubernetes/serial/Start (29.69s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-880115 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-880115 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.712311ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (18.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.400496608s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.594106869s)
--- PASS: TestNoKubernetes/serial/ProfileList (18.99s)

TestNoKubernetes/serial/Stop (1.65s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-880115
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-880115: (1.652040934s)
--- PASS: TestNoKubernetes/serial/Stop (1.65s)

TestNoKubernetes/serial/StartNoArgs (25.79s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-880115 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-880115 --driver=kvm2  --container-runtime=containerd: (25.788246601s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.79s)

TestPause/serial/SecondStartNoReconfiguration (83.09s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-575980 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-575980 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m23.06346093s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (83.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-880115 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-880115 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.42838ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/Setup (2.28s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.28s)

TestStoppedBinaryUpgrade/Upgrade (118.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2553378978 start -p stopped-upgrade-644579 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2553378978 start -p stopped-upgrade-644579 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (55.109248481s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2553378978 -p stopped-upgrade-644579 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2553378978 -p stopped-upgrade-644579 stop: (1.528397094s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-644579 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-644579 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m2.091710873s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.73s)

TestPause/serial/Pause (0.8s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-575980 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-575980 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-575980 --output=json --layout=cluster: exit status 2 (277.389684ms)

-- stdout --
	{"Name":"pause-575980","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-575980","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
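
Note: exit status 2 is expected here — minikube status signals cluster state through its exit code, and the JSON above confirms it (StatusCode 418 / "Paused" for the cluster and apiserver, 405 / "Stopped" for kubelet). A sketch of asserting the paused state from that output (assumes jq; it relies on jq's exit status rather than minikube's, so no pipefail):

    out/minikube-linux-amd64 status -p pause-575980 --output=json --layout=cluster \
      | jq -e '.StatusCode == 418 and .StatusName == "Paused"'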

TestPause/serial/Unpause (0.74s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-575980 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

TestPause/serial/PauseAgain (0.91s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-575980 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

TestPause/serial/DeletePaused (0.89s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-575980 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.89s)

TestPause/serial/VerifyDeletedResources (1s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (1.00s)
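
A sketch of the post-delete assertion (assumes jq, and assumes the {"invalid":[...],"valid":[...]} shape that profile list --output json prints): the deleted profile must appear in neither list.

    out/minikube-linux-amd64 profile list --output json \
      | jq -e '[.valid[].Name, .invalid[].Name] | index("pause-575980") == null'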

TestNetworkPlugins/group/auto/Start (117.46s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m57.462422112s)
--- PASS: TestNetworkPlugins/group/auto/Start (117.46s)

TestNetworkPlugins/group/kindnet/Start (86.33s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m26.330121075s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.33s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-644579
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestNetworkPlugins/group/enable-default-cni/Start (59.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (59.664178418s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-742615 "pgrep -a kubelet"
I0510 19:02:08.357514 1172304 config.go:182] Loaded profile config "auto-742615": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-742615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-m4cqt" [63d55b64-4c39-46f1-bf62-673e75454723] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-m4cqt" [63d55b64-4c39-46f1-bf62-673e75454723] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004736351s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-742615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
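
The DNS/Localhost/HairPin triple above is repeated verbatim for every plugin group below; consolidated, the probe set is (context name varies per group):

    kubectl --context auto-742615 exec deployment/netcat -- nslookup kubernetes.default                   # DNS: resolve the apiserver service
    kubectl --context auto-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # Localhost: reach the pod's own port
    kubectl --context auto-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # HairPin: reach itself back through its own service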

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-p5wws" [e1801d74-a5d7-46c2-8b58-49c31a6b8a46] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005167801s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
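
An equivalent one-shot form of the readiness poll the helper performs (sketch; label and namespace taken from the log):

    kubectl --context kindnet-742615 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s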

TestNetworkPlugins/group/calico/Start (78.92s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m18.918937097s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.92s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-742615 "pgrep -a kubelet"
I0510 19:02:35.826341 1172304 config.go:182] Loaded profile config "kindnet-742615": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.49s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-742615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wh4dt" [ba360e77-84e0-4206-9ff4-21b092b9088e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wh4dt" [ba360e77-84e0-4206-9ff4-21b092b9088e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.254617802s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.49s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-742615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-742615 "pgrep -a kubelet"
I0510 19:02:49.137452 1172304 config.go:182] Loaded profile config "enable-default-cni-742615": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-742615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jcbns" [121c0d43-c6e4-413e-bce5-1a9cf411a1b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jcbns" [121c0d43-c6e4-413e-bce5-1a9cf411a1b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004480438s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-742615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (83.07s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0510 19:03:07.055627 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m23.06853415s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.07s)

TestNetworkPlugins/group/custom-flannel/Start (95.68s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0510 19:03:18.277526 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:03:35.208712 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m35.680790666s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.68s)
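
Note: --cni takes either a built-in plugin name (kindnet, calico, flannel, bridge, as in the neighbouring groups) or a path to a custom manifest, which is what this group exercises:

    out/minikube-linux-amd64 start -p custom-flannel-742615 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=containerd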

TestNetworkPlugins/group/bridge/Start (80.32s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-742615 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m20.321717342s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.32s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9x2fk" [2c2abf8f-913f-467d-933a-d9a8fdda91a5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.010032714s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-742615 "pgrep -a kubelet"
I0510 19:03:59.281304 1172304 config.go:182] Loaded profile config "calico-742615": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (13.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-742615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5r87c" [33a810a2-2de6-4ed1-a168-6849196d4d81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5r87c" [33a810a2-2de6-4ed1-a168-6849196d4d81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005745822s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.36s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-742615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zwhkp" [8a0c84ab-817e-482d-880e-cdc5bfb265e4] Running
E0510 19:04:30.137744 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004123687s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-742615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/flannel/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-742615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6kz7z" [a2e3ff0d-8846-4f10-8e54-d10bbe514d71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6kz7z" [a2e3ff0d-8846-4f10-8e54-d10bbe514d71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004287018s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.32s)

TestStartStop/group/old-k8s-version/serial/FirstStart (148.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-174330 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-174330 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m28.707385577s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (148.71s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-742615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-742615 "pgrep -a kubelet"
I0510 19:04:52.766685 1172304 config.go:182] Loaded profile config "custom-flannel-742615": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-742615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ws968" [48060b16-029d-4226-9153-fba3272a814a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ws968" [48060b16-029d-4226-9153-fba3272a814a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004762043s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-742615 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestStartStop/group/no-preload/serial/FirstStart (106.23s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-860982 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0
I0510 19:04:57.348105 1172304 config.go:182] Loaded profile config "bridge-742615": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.33.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-860982 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0: (1m46.232530165s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (106.23s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-742615 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-24jfw" [dac17d04-9371-4396-addc-072387bfda39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-24jfw" [dac17d04-9371-4396-addc-072387bfda39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003801412s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-742615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-742615 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-742615 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0510 19:09:44.944351 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:52.440098 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:53.004857 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:53.011327 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:53.022746 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:53.044254 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:53.085778 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:53.167557 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:53.329140 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:53.650961 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:54.292809 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:55.574870 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:57.584355 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:57.590835 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:57.602366 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:57.623852 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:57.665219 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:57.746799 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:57.908389 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:58.137228 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:58.230248 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:58.871869 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:10:00.154283 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:10:02.716333 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:10:03.259349 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:10:05.426746 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:10:07.838598 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/embed-certs/serial/FirstStart (100.48s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-070258 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-070258 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0: (1m40.477434723s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.48s)
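
Note: --embed-certs inlines the client certificate data into kubeconfig instead of referencing files on disk. A sketch of verifying that after the start (the jsonpath filter is illustrative; minikube names the kubeconfig user after the profile):

    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-070258")].user.client-certificate-data}' | wc -c   # non-zero means the cert is embedded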

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (112.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-275316 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-275316 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0: (1m52.892574923s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (112.89s)
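
Note: --apiserver-port=8444 moves the API server off the default 8443. A quick sketch of confirming the kubeconfig entry picked up the non-default port (cluster name follows the profile name):

    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-275316")].cluster.server}'   # expect the URL to end in :8444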

TestStartStop/group/no-preload/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-860982 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e10615ca-a3a5-4e8f-b55c-fcbcf2ce821e] Pending
helpers_test.go:344: "busybox" [e10615ca-a3a5-4e8f-b55c-fcbcf2ce821e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e10615ca-a3a5-4e8f-b55c-fcbcf2ce821e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004372077s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-860982 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)
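
The DeployApp pattern shared by all StartStop groups, as a sketch (profile name from this group): create the busybox pod, wait for it to become ready, then prove exec works by reading the container's open-file limit.

    kubectl --context no-preload-860982 create -f testdata/busybox.yaml
    kubectl --context no-preload-860982 wait --for=condition=Ready pod busybox --timeout=480s   # one-shot stand-in for the 8m poll above
    kubectl --context no-preload-860982 exec busybox -- /bin/sh -c "ulimit -n"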

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-860982 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-860982 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)
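
Note the override syntax exercised here: --images remaps an addon's image (MetricsServer to echoserver:1.4) and --registries points it at a registry. fake.domain never resolves, presumably so the suite can exercise the addon machinery without pulling a real metrics-server image:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-860982 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain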

TestStartStop/group/no-preload/serial/Stop (90.83s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-860982 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-860982 --alsologtostderr -v=3: (1m30.831502079s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.83s)

TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-070258 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [095d8412-61a8-411e-8ecc-66fd601a5cd7] Pending
helpers_test.go:344: "busybox" [095d8412-61a8-411e-8ecc-66fd601a5cd7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [095d8412-61a8-411e-8ecc-66fd601a5cd7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003255781s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-070258 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-174330 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5adf3e75-0967-4ebb-aaa8-4434f13b4ca7] Pending
helpers_test.go:344: "busybox" [5adf3e75-0967-4ebb-aaa8-4434f13b4ca7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5adf3e75-0967-4ebb-aaa8-4434f13b4ca7] Running
E0510 19:07:08.576984 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:08.583399 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:08.594802 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:08.616242 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:08.658496 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:08.739985 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:08.901588 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:09.223166 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003851767s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-174330 exec busybox -- /bin/sh -c "ulimit -n"
E0510 19:07:11.147609 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-070258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0510 19:07:09.865422 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-070258 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (91.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-070258 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-070258 --alsologtostderr -v=3: (1m31.164177018s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-174330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-174330 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/old-k8s-version/serial/Stop (91.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-174330 --alsologtostderr -v=3
E0510 19:07:13.709576 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-174330 --alsologtostderr -v=3: (1m31.638150863s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.64s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-275316 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6241fff7-8a3c-46a9-bc6f-b7f05678b806] Pending
E0510 19:07:18.831714 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [6241fff7-8a3c-46a9-bc6f-b7f05678b806] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6241fff7-8a3c-46a9-bc6f-b7f05678b806] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004376936s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-275316 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-275316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-275316 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-275316 --alsologtostderr -v=3
E0510 19:07:29.073602 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:29.592625 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:29.599006 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:29.610451 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:29.631880 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:29.673359 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:29.754874 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:29.916458 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:30.238356 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:30.879825 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:32.161797 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:34.723455 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:39.845795 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:49.350796 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:49.357251 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:49.368696 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:49.390209 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:49.431930 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:49.513591 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:49.555659 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:49.675386 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:49.997171 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:50.087834 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:50.639434 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:51.921397 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:54.483525 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:07:59.605503 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:07.056330 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/addons-661496/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:09.847640 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:10.569926 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-275316 --alsologtostderr -v=3: (1m31.325947303s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.33s)
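
The recurring "Unhandled Error ... client.crt: no such file or directory" lines above are, on my reading, emitted by client-go's certificate-rotation watcher for kubeconfig entries of profiles (auto-742615, kindnet-742615, enable-default-cni-742615) that earlier network-plugin tests had already deleted; the stop itself completed normally. To read a report with that noise stripped (the report filename here is hypothetical):

grep -v 'cert_rotation.go' gopogh_report.txt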

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-860982 -n no-preload-860982
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-860982 -n no-preload-860982: exit status 7 (105.583668ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-860982 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
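
The "(may be ok)" annotation reflects that minikube's status command reports component state through its exit code rather than failing outright. My reading, to be verified against the minikube source for your version, is that the code is a small bitmask with one bit each for the host, the kubelet, and the apiserver, so 7 means all three are down, which is exactly what a freshly stopped profile should report:

out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-860982 -n no-preload-860982
echo $?   # 7 for a fully stopped profile; 0 once it is running again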

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (44.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-860982 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0
E0510 19:08:30.329677 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:30.518029 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/auto-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:35.208459 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/functional-691821/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-860982 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0: (43.826804233s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-860982 -n no-preload-860982
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070258 -n embed-certs-070258
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070258 -n embed-certs-070258: exit status 7 (77.839541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-070258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (45.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-070258 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-070258 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0: (44.698555051s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070258 -n embed-certs-070258
E0510 19:09:27.018666 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-174330 -n old-k8s-version-174330
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-174330 -n old-k8s-version-174330: exit status 7 (81.896785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-174330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (149.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-174330 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0510 19:08:51.531611 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:52.968389 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:52.974874 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:52.986368 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:53.008023 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:53.049691 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:53.131306 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:53.292928 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:53.615075 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:54.257337 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:55.539815 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:08:58.101533 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-174330 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m29.423989545s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-174330 -n old-k8s-version-174330
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (149.72s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275316 -n default-k8s-diff-port-275316
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275316 -n default-k8s-diff-port-275316: exit status 7 (72.773638ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-275316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (73.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-275316 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0
E0510 19:09:03.223649 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-275316 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0: (1m13.14476282s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-275316 -n default-k8s-diff-port-275316
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (73.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lcq5f" [3ba933f0-f57c-406e-aec9-4f09495aba73] Running
E0510 19:09:11.291842 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/enable-default-cni-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:13.465489 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004720945s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-lcq5f" [3ba933f0-f57c-406e-aec9-4f09495aba73] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004351504s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-860982 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-860982 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
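
VerifyKubernetesImages lists every image in the container runtime and flags the ones minikube itself did not ship, such as the busybox image left behind by DeployApp. A sketch of reproducing that filter by hand, assuming the JSON output exposes a repoTags field as recent minikube releases do:

out/minikube-linux-amd64 -p no-preload-860982 image list --format=json \
  | jq -r '.[].repoTags[]' \
  | grep -vE '^registry\.k8s\.io/'   # rough filter: what remains is a candidate non-minikube image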

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-860982 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-860982 -n no-preload-860982
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-860982 -n no-preload-860982: exit status 2 (275.634596ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-860982 -n no-preload-860982
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-860982 -n no-preload-860982: exit status 2 (277.03416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-860982 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-860982 -n no-preload-860982
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-860982 -n no-preload-860982
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)
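
The Pause check drives a full pause/unpause cycle, treating exit status 2 from status as expected while components are not Running. The same sequence by hand, with commands taken verbatim from the log:

out/minikube-linux-amd64 pause -p no-preload-860982 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-860982 -n no-preload-860982   # Paused (exit 2)
out/minikube-linux-amd64 unpause -p no-preload-860982 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-860982 -n no-preload-860982   # Running (exit 0)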

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (72.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268517 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268517 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0: (1m12.420846364s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (72.42s)
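
FirstStart forwards --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 through minikube to kubeadm. A quick verification sketch (not part of the test) to confirm the custom CIDR landed on the node:

kubectl --context newest-cni-268517 get nodes -o jsonpath='{.items[0].spec.podCIDR}'   # expect a subnet carved from 10.42.0.0/16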

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-7ph4w" [fa5787aa-13f4-459f-8d6e-8b5b5728ef89] Running
E0510 19:09:29.580659 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004640874s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-7ph4w" [fa5787aa-13f4-459f-8d6e-8b5b5728ef89] Running
E0510 19:09:33.946857 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:09:34.702677 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004268015s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-070258 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-070258 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-070258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070258 -n embed-certs-070258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070258 -n embed-certs-070258: exit status 2 (254.645864ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070258 -n embed-certs-070258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070258 -n embed-certs-070258: exit status 2 (244.338212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-070258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070258 -n embed-certs-070258
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070258 -n embed-certs-070258
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-sjtd5" [e6fa6fe4-4714-4f72-83f0-96b7a59ca830] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0510 19:10:13.453957 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/kindnet-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:10:13.501470 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
E0510 19:10:14.909070 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/calico-742615/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-sjtd5" [e6fa6fe4-4714-4f72-83f0-96b7a59ca830] Running
E0510 19:10:18.080000 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003844316s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-sjtd5" [e6fa6fe4-4714-4f72-83f0-96b7a59ca830] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004119927s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-275316 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-275316 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-275316 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-275316 -n default-k8s-diff-port-275316
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-275316 -n default-k8s-diff-port-275316: exit status 2 (273.885218ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-275316 -n default-k8s-diff-port-275316
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-275316 -n default-k8s-diff-port-275316: exit status 2 (272.08719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-275316 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-275316 -n default-k8s-diff-port-275316
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-275316 -n default-k8s-diff-port-275316
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0510 19:10:38.561993 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/bridge-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011269596s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-268517 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-268517 --alsologtostderr -v=3: (2.315053422s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268517 -n newest-cni-268517
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268517 -n newest-cni-268517: exit status 7 (69.216397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-268517 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (33.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268517 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0
E0510 19:10:46.388616 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268517 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.33.0: (33.311300485s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268517 -n newest-cni-268517
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-knpv4" [67888c37-f0ed-413c-856d-9af0df68dd5f] Running
E0510 19:11:14.945381 1172304 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-1165049/.minikube/profiles/custom-flannel-742615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003160816s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-268517 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-268517 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268517 -n newest-cni-268517
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268517 -n newest-cni-268517: exit status 2 (254.210902ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268517 -n newest-cni-268517
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268517 -n newest-cni-268517: exit status 2 (251.903665ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-268517 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268517 -n newest-cni-268517
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268517 -n newest-cni-268517
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-knpv4" [67888c37-f0ed-413c-856d-9af0df68dd5f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003595492s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-174330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-174330 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-174330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-174330 -n old-k8s-version-174330
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-174330 -n old-k8s-version-174330: exit status 2 (244.084947ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-174330 -n old-k8s-version-174330
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-174330 -n old-k8s-version-174330: exit status 2 (247.331012ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-174330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-174330 -n old-k8s-version-174330
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-174330 -n old-k8s-version-174330
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

                                                
                                    

Test skip (39/329)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.33.0/cached-images 0
15 TestDownloadOnly/v1.33.0/binaries 0
16 TestDownloadOnly/v1.33.0/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 3.12
266 TestNetworkPlugins/group/cilium 5.37
272 TestStartStop/group/disable-driver-mounts 0.19
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.33.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.33.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.33.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.33.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.33.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.33.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.33.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skipping amd gpu test on all but the docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on the Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
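Every TunnelCmd sub-test above was skipped by the same precondition: the test host needs passwordless sudo to manipulate routes. A minimal sketch of such a check, assuming a sudo-based probe (hypothetical helper, not the suite's actual functional_test_tunnel_test.go logic):

// tunnel_precheck_sketch.go — hypothetical precondition behind
// "password required to execute 'route'"; not minikube's real code.
package example

import (
	"os/exec"
	"testing"
)

// requirePasswordlessRoute skips unless sudo can run route without
// prompting; with -n, sudo exits non-zero (status 1) instead of
// blocking on a password prompt, which matches the skip message above.
func requirePasswordlessRoute(t *testing.T) {
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}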

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires the none driver and a non-empty SUDO_USER env
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-742615 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-742615" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-742615

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-742615"

                                                
                                                
----------------------- debugLogs end: kubenet-742615 [took: 2.95780344s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-742615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-742615
--- SKIP: TestNetworkPlugins/group/kubenet (3.12s)
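Every debugLogs probe above failed identically because the kubenet-742615 profile was deleted (or never started) before collection ran, so kubectl had no such context. As a sketch of how a collector could cheaply verify this up front and print one line instead of dozens (hypothetical helper, assuming kubectl is on PATH; not the suite's actual code):

// context_check_sketch.go — hypothetical guard for debug-log
// collection, to avoid the repeated "context was not found"
// output seen above; not minikube's real helpers_test.go code.
package example

import (
	"os/exec"
	"strings"
)

// contextExists reports whether name appears in the output of
// `kubectl config get-contexts -o name` (one context per line).
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}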

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-742615 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-742615" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-742615

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-742615" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-742615"

                                                
                                                
----------------------- debugLogs end: cilium-742615 [took: 5.223723326s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-742615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-742615
--- SKIP: TestNetworkPlugins/group/cilium (5.37s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-644290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-644290
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)